url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20-20) | updated_at (stringlengths 20-20) | closed_at (stringlengths 20-20 ⌀) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2111/comments | https://api.github.com/repos/huggingface/datasets/issues/2111/events | https://github.com/huggingface/datasets/pull/2111 | 841,082,087 | MDExOlB1bGxSZXF1ZXN0NjAwODY4OTg5 | 2,111 | Compute WER metric iteratively | [] | closed | false | null | 7 | 2021-03-25T16:06:48Z | 2021-04-06T07:20:43Z | 2021-04-06T07:20:43Z | null | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2111/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2111.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2111",
"merged_at": "2021-04-06T07:20:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2111.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2111"
} | true | [
"I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.\r\n\r\nBy default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).\r\n\r\nSome users might still want to use the old implementation.",
"@lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`...",
"Not sure about the name, if you can improve it feel free to do so ^^'\r\nThe old implementation computes the WER on the concatenation of all the input texts, while the new one makes WER measures computation independent for each reference/prediction pair.\r\nThat's why I thought of `concatenate_texts`",
"@lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.\r\n\r\nFrom the end user perspective I think it might make more sense: how do you want to compute the metric?\r\n- all in once, more RAM memory needed?\r\n- iteratively, less RAM requirements?\r\n\r\nBecause of that I was thinking of something like `iter` or `iterative`...",
"Personally like `concatenate_texts` better since I feel like `iter` or `iterate` are quite vague",
"Therefore, you can merge... ;)",
"Ok ! merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1125/comments | https://api.github.com/repos/huggingface/datasets/issues/1125/events | https://github.com/huggingface/datasets/pull/1125 | 757,194,531 | MDExOlB1bGxSZXF1ZXN0NTMyNjExMDU5 | 1,125 | Add Urdu fake news dataset. | [] | closed | false | null | 3 | 2020-12-04T15:38:17Z | 2020-12-07T03:21:05Z | 2020-12-07T03:21:05Z | null | Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1125/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1125.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1125",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1125.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1125"
} | true | [
"@lhoestq looks like a lot of files were updated... shall I create a new PR?",
"Hi @chaitnayabasava ! you can try rebasing and see if that fixes the number of files changed, otherwise please do open a new PR with only the relevant files and close this one :) ",
"Created a new PR #1230.\r\nclosing this one :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3315/comments | https://api.github.com/repos/huggingface/datasets/issues/3315/events | https://github.com/huggingface/datasets/pull/3315 | 1,061,678,452 | PR_kwDODunzps4u7WpU | 3,315 | Removing query params for dynamic URL caching | [] | closed | false | null | 5 | 2021-11-23T20:24:12Z | 2021-11-25T14:44:32Z | 2021-11-25T14:44:31Z | null | The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic.
Usage example:
```python
import datasets
class CommonVoice(datasets.GeneratorBasedBuilder):
def _info(self):
return datasets.DatasetInfo()
def _split_generators(self, dl_manager):
dl_manager.download_config.ignore_url_params = True
HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host"
dl_path = dl_manager.download_and_extract(HUGE_URL)
print(dl_path)
HUGE_URL += "&some_new_or_changed_param=12345"
dl_path = dl_manager.download_and_extract(HUGE_URL)
print(dl_path)
dl_manager = datasets.DownloadManager(dataset_name="common_voice")
CommonVoice()._split_generators(dl_manager)
```
Output:
```
/home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6
/home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3315/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3315.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3315",
"merged_at": "2021-11-25T14:44:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3315.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3315"
} | true | [
"IMO it makes more sense to have `ignore_url_params` as an attribute of `DownloadConfig` to avoid defining a new argument in `DownloadManger`'s methods.",
"@mariosasko that would make sense to me too, but it seems like `DownloadConfig` wasn't intended to be modified from a dataset loading script. @lhoestq wdyt?",
"We can expose `DownloadConfig` as a property of `DownloadManager`, and then in the script before the download call we could do: `dl_manager.download_config.ignore_url_params = True`. But yes, let's hear what Quentin thinks.",
"Oh indeed that's a great idea. This parameter is similar to others like `download_config.use_etag` that defines the behavior of the download and caching, so it's better if we have it there, and expose the `download_config`",
"Implemented it via `dl_manager.download_config.ignore_url_params` now, and also added a usage example above :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/6075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6075/comments | https://api.github.com/repos/huggingface/datasets/issues/6075/events | https://github.com/huggingface/datasets/issues/6075 | 1,822,341,398 | I_kwDODunzps5snrkW | 6,075 | Error loading music files using `load_dataset` | [] | closed | false | null | 2 | 2023-07-26T12:44:05Z | 2023-07-26T13:08:08Z | 2023-07-26T13:08:08Z | null | ### Describe the bug
I tried to load a music file using `datasets.load_dataset()` from the repository - https://huggingface.co/datasets/susnato/pop2piano_real_music_test
I got the following error -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2803, in __getitem__
return self._getitem(key)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2788, in _getitem
formatted_output = format_table(
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 629, in format_table
return formatter(pa_table, query_type=query_type)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 398, in __call__
return self.format_column(pa_table)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 442, in format_column
column = self.python_features_decoder.decode_column(column, pa_table.column_names[0])
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/formatting/formatting.py", line 218, in decode_column
return self.features.decode_column(column, column_name) if self.features else column
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in decode_column
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1924, in <listcomp>
[decode_nested_example(self[column_name], value) if value is not None else None for value in column]
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/features.py", line 1325, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/datasets/features/audio.py", line 184, in decode_example
array, sampling_rate = sf.read(f)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 372, in read
with SoundFile(file, 'r', samplerate, channels,
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 740, in __init__
self._file = self._open(file, mode_int, closefd)
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1264, in _open
_error_check(_snd.sf_error(file_ptr),
File "/home/susnato/anaconda3/envs/p2p/lib/python3.9/site-packages/soundfile.py", line 1455, in _error_check
raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace'))
RuntimeError: Error opening <_io.BufferedReader name='/home/susnato/.cache/huggingface/datasets/downloads/d2b09cb974b967b13f91553297c40c0f02f3c0d4c8356350743598ff48d6f29e'>: Format not recognised.
```
### Steps to reproduce the bug
Code to reproduce the error -
```python
from datasets import load_dataset
ds = load_dataset("susnato/pop2piano_real_music_test", split="test")
print(ds[0])
```
### Expected behavior
I should be able to read the music file without any error.
### Environment info
- `datasets` version: 2.14.0
- Platform: Linux-5.19.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6075/timeline | null | completed | null | null | false | [
"This code behaves as expected on my local machine or in Colab. Which version of `soundfile` do you have installed? MP3 requires `soundfile>=0.12.1`.",
"I upgraded the `soundfile` and it's working now! \r\nThanks @mariosasko for the help!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2628/comments | https://api.github.com/repos/huggingface/datasets/issues/2628/events | https://github.com/huggingface/datasets/pull/2628 | 941,676,404 | MDExOlB1bGxSZXF1ZXN0Njg3NTE0NzQz | 2,628 | Use ETag of remote data files | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-12T05:10:10Z | 2021-07-12T14:08:34Z | 2021-07-12T08:40:07Z | null | Use ETag of remote data files to create config ID.
Related to #2616. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2628/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2628",
"merged_at": "2021-07-12T08:40:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2628"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1334/comments | https://api.github.com/repos/huggingface/datasets/issues/1334/events | https://github.com/huggingface/datasets/pull/1334 | 759,699,993 | MDExOlB1bGxSZXF1ZXN0NTM0NjU5MDg2 | 1,334 | Add QED Amara Dataset | [] | closed | false | null | 0 | 2020-12-08T19:01:13Z | 2020-12-10T11:17:25Z | 2020-12-10T11:15:57Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1334/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1334",
"merged_at": "2020-12-10T11:15:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1334"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/31 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/31/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/31/comments | https://api.github.com/repos/huggingface/datasets/issues/31/events | https://github.com/huggingface/datasets/pull/31 | 610,677,641 | MDExOlB1bGxSZXF1ZXN0NDEyMDczNDE4 | 31 | [Circle ci] Install a virtual env before running tests | [] | closed | false | null | 0 | 2020-05-01T10:11:17Z | 2020-05-01T22:06:16Z | 2020-05-01T22:06:15Z | null | Install a virtual env before running tests to avoid running into sudo issues when dynamically downloading files.
Same number of tests now pass / fail as on my local computer:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/31/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/31/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/31.diff",
"html_url": "https://github.com/huggingface/datasets/pull/31",
"merged_at": "2020-05-01T22:06:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/31.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/31"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/594/comments | https://api.github.com/repos/huggingface/datasets/issues/594/events | https://github.com/huggingface/datasets/pull/594 | 696,816,893 | MDExOlB1bGxSZXF1ZXN0NDgyODQ1OTc5 | 594 | Fix germeval url | [] | closed | false | null | 0 | 2020-09-09T13:29:35Z | 2020-09-09T13:34:35Z | 2020-09-09T13:34:34Z | null | Continuation of #593 but without the dummy data hack | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/594/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/594.diff",
"html_url": "https://github.com/huggingface/datasets/pull/594",
"merged_at": "2020-09-09T13:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/594.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/594"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5088 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5088/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5088/comments | https://api.github.com/repos/huggingface/datasets/issues/5088/events | https://github.com/huggingface/datasets/issues/5088 | 1,400,530,412 | I_kwDODunzps5TemXs | 5,088 | load_datasets("json", ...) don't read local .json.gz properly | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2022-10-07T02:16:58Z | 2022-10-07T14:43:16Z | null | null | ## Describe the bug
I have a local file `*.json.gz` and it can be read by `pandas.read_json(lines=True)`, but cannot be read by `load_datasets("json")` (resulting in 0 lines)
## Steps to reproduce the bug
```python
fpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'
ds_panda = DatasetDict(
test=Dataset.from_pandas(
pd.read_json(fpath, lines=True)
)
)
ds_direct = load_dataset(
'json', data_files={
'test': fpath
}, features=Features(
text_input=Value(dtype="string", id=None),
text_output=Value(dtype="string", id=None)
)
)
len(ds_panda['test']), len(ds_direct['test'])
```
## Expected results
Lines of `ds_panda['test']` and `ds_direct['test']` should match.
## Actual results
```
Using custom data configuration default-c0ef2598760968aa
Downloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...
Dataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-c0ef2598760968aa/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.
(62087, 0)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.8.13
- PyArrow version: 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5088/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5088/timeline | null | null | null | null | false | [
"Hi @junwang-wish, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce the bug. Which version of `datasets` are you using? Does the problem persist if you update `datasets`?\r\n```shell\r\npip install -U datasets\r\n``` ",
"Thanks @albertvillanova I updated `datasets` from `2.5.1` to `2.5.2` and tested copying the `json.gz` to a different directory and my mind was blown:\r\n\r\n```python\r\nfpath = '/data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nproduces \r\n```python\r\nUsing custom data configuration default-0e6cf24134163e8b\r\nFound cached dataset json (/data/junwang/.cache/huggingface/datasets/json/default-0e6cf24134163e8b/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab)\r\n(1, 0)\r\n```\r\nbut then I ran below command to see if the same file in a different directory leads to same discrepancy\r\n```shell\r\ncp /data/junwang/.cache/general/57b6f2314cbe0bc45dda5b78f0871df2/test.json.gz tmp_test.json.gz\r\n```\r\nand so I ran\r\n```python\r\nfpath = 'tmp_test.json.gz'\r\nds_panda = DatasetDict(\r\n test=Dataset.from_pandas(\r\n pd.read_json(fpath, lines=True)\r\n )\r\n)\r\nds_direct = load_dataset(\r\n 'json', data_files={\r\n 'test': fpath\r\n }, features=Features(\r\n text_input=Value(dtype=\"string\", id=None),\r\n text_output=Value(dtype=\"string\", id=None)\r\n )\r\n)\r\nlen(ds_panda['test']), len(ds_direct['test'])\r\n```\r\nand behold, I get \r\n```python\r\nUsing custom data configuration default-f679b32ab0008520\r\nDownloading and preparing dataset json/default to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDataset json downloaded and prepared to /data/junwang/.cache/huggingface/datasets/json/default-f679b32ab0008520/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n(1, 1)\r\n```\r\nThey match now !\r\n\r\nThis problem happens regardless of the shell I use (VScode jupyter extension or plain old Python REPL). \r\n\r\nI attached the `json.gz` here for reference: [test.json.gz](https://github.com/huggingface/datasets/files/9734843/test.json.gz)\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3841/comments | https://api.github.com/repos/huggingface/datasets/issues/3841/events | https://github.com/huggingface/datasets/issues/3841 | 1,161,203,842 | I_kwDODunzps5FNpCC | 3,841 | Pyright reportPrivateImportUsage when `from datasets import load_dataset` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2022-03-07T10:24:04Z | 2023-02-18T19:14:03Z | 2023-02-13T13:48:41Z | null | ## Describe the bug
Pyright complains about module not exported.
## Steps to reproduce the bug
Use an editor/IDE with Pyright Language server with default configuration:
```python
from datasets import load_dataset
```
## Expected results
No complain from Pyright
## Actual results
Pyright complain below:
```
`load_dataset` is not exported from module "datasets"
Import from "datasets.load" instead [reportPrivateImportUsage]
```
Importing from `datasets.load` does indeed solve the problem, but I believe importing directly from the top-level `datasets` is the intended usage per the documentation.
## Environment info
- `datasets` version: 1.18.3
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0
| {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3841/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nThis issue stems from `datasets` having `py.typed` defined (see https://github.com/microsoft/pyright/discussions/3764#discussioncomment-3282142) - to avoid it, we would either have to remove `py.typed` (added to be compliant with PEP-561) or export the names with `__all__`/`from .submodule import name as name`.\r\n\r\nTransformers is fine as it no longer has `py.typed` (removed in https://github.com/huggingface/transformers/pull/18485)\r\n\r\nWDYT @lhoestq @albertvillanova @polinaeterna \r\n\r\n@sgugger's point makes sense - we should either be \"properly typed\" (have py.typed + mypy tests) or drop `py.typed` as Transformers did (I like this option better).\r\n\r\n(cc @Wauplin since `huggingface_hub` has the same issue.)",
"I'm fine with dropping it, but autotrain people won't be happy @SBrandeis ",
"> (cc @Wauplin since huggingface_hub has the same issue.)\r\n\r\nHmm maybe we have the same issue but I haven't been able to reproduce something similar to `\"load_dataset\" is not exported from module \"datasets\"` message (using VSCode+Pylance -that is powered by Pyright). `huggingface_hub` contains a `py.typed` file but the package itself is actually typed. We are running `mypy` in our CI tests since ~3 months and so far it seems to be ok. But happy to change if it causes some issues with linters.\r\n\r\nAlso the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.",
"@Wauplin I only get the error if I use Pyright's CLI tool or the Pyright extension (not sure why, but Pylance also doesn't report this issue on my machine)\r\n\r\n> Also the top-level [`__init__.py`](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py) is quite different in `hfh` than `datasets` (at first glance). We have a section at the bottom to import all high level methods/classes in a `if TYPE_CHECKING` block.\r\n\r\nI tried to fix the issue with `TYPE_CHECKING`, but it still fails if `py.typed` is present.",
"@mariosasko thank for the tip. I have been able to reproduce the issue as well. I would be up for including a (huge) static `__all__` variable in the `__init__.py` (since the file is already generated automatically in `hfh`) but honestly I don't think it's worth the hassle. \r\n\r\nI'll delete the `py.typed` file in `huggingface_hub` to be consistent between HF libraries. I opened a PR here: https://github.com/huggingface/huggingface_hub/pull/1329",
"I am getting this error in google colab today:\r\n\r\n\r\n\r\nThe code runs just fine too."
] |
https://api.github.com/repos/huggingface/datasets/issues/223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/223/comments | https://api.github.com/repos/huggingface/datasets/issues/223/events | https://github.com/huggingface/datasets/issues/223 | 627,683,386 | MDU6SXNzdWU2Mjc2ODMzODY= | 223 | [Feature request] Add FLUE dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 3 | 2020-05-30T08:52:15Z | 2020-12-03T13:39:33Z | 2020-12-03T13:39:33Z | null | Hi,
I think it would be interesting to add the FLUE dataset for francophones or anyone wishing to work on French.
In other requests, I read that you are already working on some datasets, and I was wondering if FLUE was planned.
If it is not the case, I can provide each of the cleaned FLUE datasets (in the form of a directly exploitable dataset rather than in the original xml formats which require additional processing, with the French part for cases where the dataset is based on a multilingual dataframe, etc.). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/223/timeline | null | completed | null | null | false | [
"Hi @lbourdois, yes please share it with us",
"@mariamabarham \r\nI put all the datasets on this drive: https://1drv.ms/u/s!Ao2Rcpiny7RFinDypq7w-LbXcsx9?e=iVsEDh\r\n\r\n\r\nSome information : \r\n• For FLUE, the quote used is\r\n\r\n> @misc{le2019flaubert,\r\n> title={FlauBERT: Unsupervised Language Model Pre-training for French},\r\n> author={Hang Le and Loïc Vial and Jibril Frej and Vincent Segonne and Maximin Coavoux and Benjamin Lecouteux and Alexandre Allauzen and Benoît Crabbé and Laurent Besacier and Didier Schwab},\r\n> year={2019},\r\n> eprint={1912.05372},\r\n> archivePrefix={arXiv},\r\n> primaryClass={cs.CL}\r\n> }\r\n\r\n• The Github repo of FLUE is avaible here : https://github.com/getalp/Flaubert/tree/master/flue\r\n\r\n\r\n\r\nInformation related to the different tasks of FLUE : \r\n\r\n**1. Classification**\r\nThree dataframes are available: \r\n- Book\r\n- DVD\r\n- Music\r\nFor each of these dataframes is available a set of training and test data, and a third one containing unlabelled data.\r\n\r\nCitation : \r\n>@dataset{prettenhofer_peter_2010_3251672,\r\n author = {Prettenhofer, Peter and\r\n Stein, Benno},\r\n title = {{Webis Cross-Lingual Sentiment Dataset 2010 (Webis- \r\n CLS-10)}},\r\n month = jul,\r\n year = 2010,\r\n publisher = {Zenodo},\r\n doi = {10.5281/zenodo.3251672},\r\n url = {https://doi.org/10.5281/zenodo.3251672}\r\n}\r\n\r\n\r\n**2. Paraphrasing** \r\nFrench part of the PAWS-X dataset (https://github.com/google-research-datasets/paws).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nCitation : \r\n> @InProceedings{pawsx2019emnlp,\r\n> title = {{PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification}},\r\n> author = {Yang, Yinfei and Zhang, Yuan and Tar, Chris and Baldridge, Jason},\r\n> booktitle = {Proc. of EMNLP},\r\n> year = {2019}\r\n> }\r\n\r\n\r\n\r\n**3. Natural Language Inference**\r\nFrench part of the XNLI dataset (https://github.com/facebookresearch/XNLI).\r\nThree dataframes are available: \r\n- train\r\n- dev\r\n- test \r\n\r\nFor the dev and test datasets, extra columns compared to the train dataset were available so I left them in the dataframe (I didn't know if these columns could be useful for other tasks or not). \r\nIn the context of the FLUE benchmark, only the columns gold_label, sentence1 and sentence2 are useful.\r\n\r\n\r\nCitation : \r\n\r\n> @InProceedings{conneau2018xnli,\r\n> author = \"Conneau, Alexis\r\n> and Rinott, Ruty\r\n> and Lample, Guillaume\r\n> and Williams, Adina\r\n> and Bowman, Samuel R.\r\n> and Schwenk, Holger\r\n> and Stoyanov, Veselin\",\r\n> title = \"XNLI: Evaluating Cross-lingual Sentence Representations\",\r\n> booktitle = \"Proceedings of the 2018 Conference on Empirical Methods\r\n> in Natural Language Processing\",\r\n> year = \"2018\",\r\n> publisher = \"Association for Computational Linguistics\",\r\n> location = \"Brussels, Belgium\",\r\n\r\n\r\n**4. Parsing**\r\nThe dataset used by the FLUE authors for this task is not freely available.\r\nUsers of your library will therefore not be able to access it.\r\nNevertheless, I think maybe it is useful to add a link to the site where to request this dataframe: http://ftb.linguist.univ-paris-diderot.fr/telecharger.php?langue=en \r\n(personally it was sent to me less than 48 hours after I requested it).\r\n\r\n\r\n**5. 
Word Sense Disambiguation Tasks**\r\n5.1 Verb Sense Disambiguation\r\n\r\nTwo dataframes are available: train and test\r\nFor both dataframes, 4 columns are available: document, sentence, lemma and word.\r\nI created the document column thinking that there were several documents in the dataset but afterwards it turns out that there were not: several sentences but only one document. It's up to you to keep it or not when importing these two dataframes.\r\n\r\nThe sentence column is used to determine to which sentence the word in the word column belongs. It is in the form of a dictionary {'id': 'd000.s001', 'idx': '1'}. I thought for a while to keep only the idx because the id doesn't matter any more information. Nevertheless for the test dataset, the dictionary has an extra value indicating the source of the sentence. I don't know if it's useful or not, that's why I left the dictionary just in case. The user is free to do what he wants with it.\r\n\r\nCitation : \r\n\r\n> Segonne, V., Candito, M., and Crabb ́e, B. (2019). Usingwiktionary as a resource for wsd: the case of frenchverbs. InProceedings of the 13th International Confer-ence on Computational Semantics-Long Papers, pages259–270\r\n\r\n5.2 Noun Sense Disambiguation\r\nTwo dataframes are available: 2 train and 1 test\r\n\r\nI confess I didn't fully understand the procedure for this task.\r\n\r\nCitation : \r\n\r\n> @dataset{loic_vial_2019_3549806,\r\n> author = {Loïc Vial},\r\n> title = {{French Word Sense Disambiguation with Princeton \r\n> WordNet Identifiers}},\r\n> month = nov,\r\n> year = 2019,\r\n> publisher = {Zenodo},\r\n> version = {1.0},\r\n> doi = {10.5281/zenodo.3549806},\r\n> url = {https://doi.org/10.5281/zenodo.3549806}\r\n> }\r\n\r\nFinally, additional information about FLUE is available in the FlauBERT publication : \r\nhttps://arxiv.org/abs/1912.05372 (p. 4).\r\n\r\n\r\nHoping to have provided you with everything you need to add this benchmark :) \r\n",
"https://github.com/huggingface/datasets/pull/943"
] |
https://api.github.com/repos/huggingface/datasets/issues/1518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1518/comments | https://api.github.com/repos/huggingface/datasets/issues/1518/events | https://github.com/huggingface/datasets/pull/1518 | 764,045,722 | MDExOlB1bGxSZXF1ZXN0NTM4MzAyNzYy | 1,518 | Add twi text | [] | closed | false | null | 2 | 2020-12-12T16:52:02Z | 2020-12-13T18:53:37Z | 2020-12-13T18:53:37Z | null | Add Twi texts | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1518/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1518/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1518.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1518",
"merged_at": "2020-12-13T18:53:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1518.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1518"
} | true | [
"Hii please follow me",
"thank you"
] |
https://api.github.com/repos/huggingface/datasets/issues/4267 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4267/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4267/comments | https://api.github.com/repos/huggingface/datasets/issues/4267/events | https://github.com/huggingface/datasets/pull/4267 | 1,223,214,275 | PR_kwDODunzps43LzOR | 4,267 | Replace data URL in SAMSum dataset within the same repository | [] | closed | false | null | 1 | 2022-05-02T18:38:08Z | 2022-05-06T08:38:13Z | 2022-05-02T19:03:49Z | null | Replace data URL with one in the same repository. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4267/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4267/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4267.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4267",
"merged_at": "2022-05-02T19:03:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4267.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4267"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1418/comments | https://api.github.com/repos/huggingface/datasets/issues/1418/events | https://github.com/huggingface/datasets/pull/1418 | 760,672,320 | MDExOlB1bGxSZXF1ZXN0NTM1NDY0NzQ4 | 1,418 | Add arabic dialects | [] | closed | false | null | 1 | 2020-12-09T21:06:07Z | 2020-12-17T09:40:56Z | 2020-12-17T09:40:56Z | null | Data loading script and dataset card for Dialectal Arabic Resources dataset.
Fixed git issues from PR #976 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1418/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1418/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1418.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1418",
"merged_at": "2020-12-17T09:40:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1418.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1418"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/136/comments | https://api.github.com/repos/huggingface/datasets/issues/136/events | https://github.com/huggingface/datasets/pull/136 | 619,211,018 | MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4 | 136 | Update README.md | [] | closed | false | null | 1 | 2020-05-15T20:01:07Z | 2020-05-17T12:17:28Z | 2020-05-17T12:17:28Z | null | small typo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/136/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/136",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/136"
} | true | [
"Thanks, this was fixed with #135 :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2916/comments | https://api.github.com/repos/huggingface/datasets/issues/2916/events | https://github.com/huggingface/datasets/pull/2916 | 997,003,661 | PR_kwDODunzps4rx5ua | 2,916 | Add OpenAI's pass@k code evaluation metric | [] | closed | false | null | 4 | 2021-09-15T12:05:43Z | 2021-11-12T14:19:51Z | 2021-11-12T14:19:50Z | null | This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references` convention.
The addition of this metric should enable the evaluation against the code evaluation datasets added in #2897 and #2893.
A few open questions:
- The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in `datasets`?
- This metric executes generated Python code and as such it poses dangers of executing malicious code. OpenAI addresses this issue by 1) commenting the `exec` call in the code so the user has to actively uncomment it and read the warning and 2) suggests using a sandbox environment (gVisor container). Should we add a similar safeguard? E.g. a prompt that needs to be answered when initialising the metric? Or at least a warning message?
- Naming: the implementation sticks to the `predictions`/`references` naming, however, the references are not reference solutions but unittest to test the solution. While reference solutions are also available they are not used. Should the naming be adapted? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2916/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2916",
"merged_at": "2021-11-12T14:19:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2916"
} | true | [
"> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessing [here](https://huggingface.co/docs/datasets/loading.html?highlight=rank#distributed-setup)\r\nYou can test to spawn several processes where each process would load the metric. Then in each process you add some references/predictions to the metric. Finally you call compute() in each process and on process 0 it should return the result on all the references/predictions\r\n\r\nLet me know if you have questions or if I can help",
"Is there a good way to debug the Windows tests? I suspect it is an issue with `multiprocessing`, but I can't see the error messages.",
"Indeed it has an issue on windows.\r\nIn your example it's supposed to output\r\n```python\r\n{'pass@1': 0.5, 'pass@2': 1.0}\r\n```\r\nbut it gets\r\n```python\r\n{'pass@1': 0.0, 'pass@2': 0.0}\r\n```\r\n\r\nI'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want",
"> I'm not on my windows machine today so I can't take a look at it. I can dive into it early next week if you want\r\n\r\nThat would be great - unfortunately I have no access to a windows machine at the moment. I am quite sure it is an issue with in exectue.py because of multiprocessing.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1078 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1078/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1078/comments | https://api.github.com/repos/huggingface/datasets/issues/1078/events | https://github.com/huggingface/datasets/pull/1078 | 756,633,215 | MDExOlB1bGxSZXF1ZXN0NTMyMTUyMzgx | 1,078 | add AJGT dataset | [] | closed | false | null | 0 | 2020-12-03T22:16:31Z | 2020-12-04T09:55:15Z | 2020-12-04T09:55:15Z | null | Arabic Jordanian General Tweets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1078/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1078/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1078.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1078",
"merged_at": "2020-12-04T09:55:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1078.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1078"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5145/comments | https://api.github.com/repos/huggingface/datasets/issues/5145/events | https://github.com/huggingface/datasets/issues/5145 | 1,418,005,452 | I_kwDODunzps5UhQvM | 5,145 | Dataset order is not deterministic with ZIP archives and `iter_files` | [] | closed | false | null | 8 | 2022-10-21T09:00:03Z | 2022-10-27T09:51:49Z | 2022-10-27T09:51:10Z | null | ### Describe the bug
For the `beans` dataset (did not try on other), the order of samples is not the same on different machines. Tested on my local laptop, github actions machine, and ec2 instance. The three yield a different order.
### Steps to reproduce the bug
In a clean docker container or conda environment with datasets==2.6.1, run
```python
from datasets import load_dataset
from pprint import pprint
data = load_dataset("beans", split="validation")
pprint(data["image_file_path"])
```
### Expected behavior
The order of the images is the same on all machines.
### Environment info
On the EC2 instance:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.14.291-218.527.amzn2.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.7.10
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
- Numpy version: not checked
```
On my local laptop:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Numpy version: 1.23.1
```
On github actions:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.15.0-1022-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.14
- PyArrow version: 9.0.0
- Pandas version: 1.5.1
- Numpy version: 1.23.4
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5145/timeline | null | completed | null | null | false | [
"Thanks for reporting ! The issue doesn't come from shuffling, but from `beans` row order not being deterministic:\r\n\r\nhttps://huggingface.co/datasets/beans/blob/main/beans.py uses `dl_manager.iter_files` on ZIP archives and the file order doesn't seen to be deterministic and changes across machines",
"Thank you for noticing indeed!",
"This is still a bug, so I'd keep this one open if you don't mind ;)",
"Besides the linked PR, to make the loading process fully deterministic, I believe we should also sort the data files [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L276) and [here](https://github.com/huggingface/datasets/blob/df4bdd365f2abb695f113cbf8856a925bc70901b/src/datasets/data_files.py#L485) (e.g. fsspec's `LocalFileSystem.glob` relies on `os.scandir`, which yields the contents in arbitrary order). My concern is the overhead of these sorts... Maybe we could introduce a new flag to `load_dataset` similar to TFDS' [`shuffle_files`](https://www.tensorflow.org/datasets/determinism#determinism_when_reading) or sort only if the number of data files is small?",
"We already return the result sorted at the end of `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository` if I'm not mistaken",
"@lhoestq Oh, you are right. Feel free to ignore my comment.",
"I think the corresponding PR is ready to be merged :hugs: ",
"@albertvillanova Thanks for the fix!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2187/comments | https://api.github.com/repos/huggingface/datasets/issues/2187/events | https://github.com/huggingface/datasets/issues/2187 | 852,939,736 | MDU6SXNzdWU4NTI5Mzk3MzY= | 2,187 | Question (potential issue?) related to datasets caching | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | 15 | 2021-04-08T00:16:28Z | 2023-01-03T18:30:38Z | null | null | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```
Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2187/timeline | null | null | null | null | false | [
"An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ",
"Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it’s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn’t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().",
"Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) – select the download/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ",
"It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```",
"I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? ",
"Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~/.cache/huggingface/datasets/<dataset_name>/<config_id>/<version> directory.\r\n\r\n> What information is used to create the directory/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.",
"Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ",
"That makes total sense indeed !\r\nI think we can do the change",
"I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and/or file access/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!",
"I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot/caching of the dataset. ",
"We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?",
"I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. ",
"Hi! I have the same challenge with caching, where the **.cache** folder is required even though it isn't possible for me.\r\n\r\nI'd like to run transformers in Snowflake, using Snowpark for Python, this would mean I could provide configurable transformers in real-time for business users without having data leave an environment (for security reasons). With no need for data transfer,n the compute is faster. It is a large use case - is it possible to entirely disable caching in certain scenarios?\r\n@lhoestq ?\r\n",
"You can try to change the location of the cache folder using the `HF_CACHE_HOME` environment variable, and set a location where you have read/write access.",
"Thanks @lhoestq \r\n\r\nI wanted to do that, however, snowflake does not allow it to write at all. I'm asking around to see if they can help me out with that issue 😅"
] |
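A short sketch pulling together the behaviours discussed in this thread ("train.csv" is just a placeholder path; `set_caching_enabled` and the `force_redownload` mode are the APIs quoted above):

```python
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)  # only affects the map/filter cache files

# load_dataset still reuses the prepared Arrow files by default
ds = load_dataset("csv", data_files="train.csv")

# regenerating from scratch needs an explicit download mode
# (datasets.GenerateMode.FORCE_REDOWNLOAD, or its string value as below)
ds_fresh = load_dataset("csv", data_files="train.csv", download_mode="force_redownload")
```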
https://api.github.com/repos/huggingface/datasets/issues/6003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6003/comments | https://api.github.com/repos/huggingface/datasets/issues/6003/events | https://github.com/huggingface/datasets/issues/6003 | 1,786,554,110 | I_kwDODunzps5qfKb- | 6,003 | interleave_datasets & DataCollatorForLanguageModeling having a conflict ? | [] | open | false | null | 0 | 2023-07-03T17:15:31Z | 2023-07-03T17:15:31Z | null | null | ### Describe the bug
Hi everyone :)
I have two local, custom datasets (1 "sentence" per line) which I split 95/5 for pre-training a BERT model. I use a modified version of `run_mlm.py` in order to be able to make use of `interleave_datasets`:
- `tokenize()` runs fine
- `group_text()` runs fine
Every time, at step 19, I get
```pytb
File "env/lib/python3.9/site-packages/transformers/data/data_collator.py", line 779, in torch_mask_tokens
inputs[indices_random] = random_words[indices_random]
RuntimeError: Index put requires the source and destination dtypes match, got Float for the destination and Long for the source.
```
I tried:
- training without interleave on dataset 1, it runs
- training without interleave on dataset 2, it runs
- training without `.to_iterable_dataset()`, it hangs then crashes
- training without `group_texts()` and with padding to max_length seemed to fix the issue, but it may simply be that the failure would have shown up much later in terms of steps.
I might have coded something wrong, but I can't figure out what.
### Steps to reproduce the bug
I have this function:
```py
def build_dataset(path: str, percent: str):
dataset = load_dataset(
"text",
data_files={"train": [path]},
split=f"train[{percent}]"
)
dataset = dataset.map(
lambda examples: tokenize(examples["text"]),
batched=True,
num_proc=num_proc,
)
dataset = dataset.map(
group_texts,
batched=True,
num_proc=num_proc,
desc=f"Grouping texts in chunks of {tokenizer.max_seq_length}",
remove_columns=["text"]
)
print(len(dataset))
return dataset.to_iterable_dataset()
```
I hardcoded group_text:
```py
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, and if the total_length < max_seq_length we exclude this batch and return an empty dict.
# We could add padding if the model supported it instead of this drop, you can customize this part to your needs.
total_length = (total_length // 512) * 512
# Split by chunks of max_len.
result = {
k: [t[i: i + 512] for i in range(0, total_length, 512)]
for k, t in concatenated_examples.items()
}
# result = {k: [el for el in elements if el] for k, elements in result.items()}
return result
```
And then I build datasets using the following code:
```py
train1 = build_dataset("d1.txt", ":95%")
train2 = build_dataset("d2.txt", ":95%")
dev1 = build_dataset("d1.txt", "95%:")
dev2 = build_dataset("d2.txt", "95%:")
```
and finally I run
```py
train_dataset = interleave_datasets(
[train1, train2],
probabilities=[0.8, 0.2],
seed=42
)
eval_dataset = interleave_datasets(
[dev1, dev2],
probabilities=[0.8, 0.2],
seed=42
)
```
Then I run the training part which remains mostly untouched:
> CUDA_VISIBLE_DEVICES=1 python custom_dataset.py --model_type bert --per_device_train_batch_size 32 --do_train --output_dir /var/mlm/training-bert/model --max_seq_length 512 --save_steps 10000 --save_total_limit 3 --auto_find_batch_size --logging_dir ./logs-bert --learning_rate 0.0001 --do_train --num_train_epochs 25 --warmup_steps 10000 --max_step 45000 --fp16
### Expected behavior
The model should then train normally, but fails every time at the same step (19).
Printing the variables at `inputs[indices_random] = random_words[indices_random]` shows a magnificent empty tensor of shape (, 32) [if I remember correctly].
### Environment info
transformers[torch] 4.30.2
Ubuntu
A100 0 CUDA 12
Driver Version: 525.116.04 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6003/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6003/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/217/comments | https://api.github.com/repos/huggingface/datasets/issues/217/events | https://github.com/huggingface/datasets/issues/217 | 627,128,403 | MDU6SXNzdWU2MjcxMjg0MDM= | 217 | Multi-task dataset mixing | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | 26 | 2020-05-29T09:22:26Z | 2022-10-22T00:45:50Z | null | null | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sample from tasks proportionally to their dataset size
- **Equal mixing** - sample uniformly from each task
- **Temperature-scaled mixing** - The generalized approach used by multilingual BERT which uses a temperature T, where the mixing rate of each task is raised to the power 1/T and renormalized. When T=1 this is equivalent to examples-proportional mixing, and it approaches equal mixing as T increases (see the sketch below).
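A minimal sketch of these strategies in terms of per-task sampling rates (the dataset sizes below are made-up example numbers; `k` is the artificial dataset size limit from the T5 paper):

```python
def mixing_rates(sizes, temperature=1.0, k=2**21):
    """Temperature-scaled mixing: T=1 is examples-proportional, large T tends towards equal mixing."""
    clipped = {name: min(n, k) for name, n in sizes.items()}
    scaled = {name: n ** (1.0 / temperature) for name, n in clipped.items()}
    total = sum(scaled.values())
    return {name: s / total for name, s in scaled.items()}

sizes = {"squad": 88_000, "cnn_dm": 287_000, "imdb": 25_000}  # made-up sizes
print(mixing_rates(sizes, temperature=1.0))    # examples-proportional mixing
print(mixing_rates(sizes, temperature=2.0))    # temperature-scaled mixing
print(mixing_rates(sizes, temperature=1e9))    # effectively equal mixing
```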
Following this discussion https://github.com/huggingface/transformers/issues/4340 in [transformers](https://github.com/huggingface/transformers), @enzoampil suggested that the `nlp` library might be a better place for this functionality.
Some method for combining datasets could be implemented, e.g.
```
dataset = nlp.load_multitask(['squad','imdb','cnn_dm'], temperature=2.0, ...)
```
We would need a few additions:
- Method of identifying the tasks - how can we support adding a string to each task as an identifier: e.g. 'summarisation: '?
- Method of combining the metrics - a standard approach is to use the specific metric for each task and add them together for a combined score.
It would be great to support common use cases such as pretraining on the GLUE benchmark before fine-tuning on each GLUE task in turn.
I'm willing to write bits/most of this I just need some guidance on the interface and other library details so I can integrate it properly.
| {
"+1": 12,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 12,
"url": "https://api.github.com/repos/huggingface/datasets/issues/217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/217/timeline | null | null | null | null | false | [
"I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **Hypothesis**: The St. Louis Cardinals have always won.\r\n> \r\n> - **Premise**: yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals when they were there were uh a mostly a losing team but \r\n\r\nwas flattened to a single input:\r\n\r\n> mnli hypothesis: The St. Louis Cardinals have always won. premise:\r\n> yeah well losing is i mean i’m i’m originally from Saint Louis and Saint Louis Cardinals\r\n> when they were there were uh a mostly a losing team but.\r\n\r\nThis flattening is actually a very simple operation in `nlp` already. You would just need to do the following:\r\n\r\n```python \r\ndef flatten_inputs(example):\r\n return {\"input\": \"mnli hypothesis: \" + example['hypothesis'] + \" premise: \" + example['premise']}\r\n\r\nt5_ready_mnli_ds = mnli_ds.map(flatten_inputs, remove_columns=[<all columns except output>])\r\n```\r\n\r\nSo I guess converting the datasets into the same format can be left to the user for now. \r\nThen the question is how we can merge the datasets. I would probably be in favor of a simple \r\n\r\n```python \r\ndataset.add()\r\n```\r\n\r\nfunction that checks if the dataset is of the same format and if yes merges the two datasets. Finally, how should the sampling be implemented? **Examples-proportional mixing** corresponds to just merging the datasets and shuffling. For the other two sampling approaches we would need some higher-level features, maybe even a `dataset.sample()` function for merged datasets. \r\n\r\nWhat are your thoughts on this @thomwolf @lhoestq @ghomasHudson @enzoampil ?",
"I agree that we should leave the flattening of the dataset to the user for now. Especially because although the T5 framing seems obvious, there are slight variations on how the T5 authors do it in comparison to other approaches such as gpt-3 and decaNLP.\r\n\r\nIn terms of sampling, Examples-proportional mixing does seem the simplest to implement so would probably be a good starting point.\r\n\r\nTemperature-scaled mixing would probably most useful, offering flexibility as it can simulate the other 2 methods by setting the temperature parameter. There is a [relevant part of the T5 repo](https://github.com/google-research/text-to-text-transfer-transformer/blob/03c94165a7d52e4f7230e5944a0541d8c5710788/t5/data/utils.py#L889-L1118) which should help with implementation.\r\n\r\nAccording to the T5 authors, equal-mixing performs worst. Among the other two methods, tuning the K value (the artificial dataset size limit) has a large impact.\r\n",
"I agree with going with temperature-scaled mixing for its flexibility!\r\n\r\nFor the function that combines the datasets, I also find `dataset.add()` okay while also considering that users may want it to be easy to combine a list of say 10 data sources in one go.\r\n\r\n`dataset.sample()` should also be good. By the looks of it, we're planning to have as main parameters: `temperature`, and `K`.\r\n\r\nOn converting the datasets to the same format, I agree that we can leave these to the users for now. But, I do imagine it'd be an awesome feature for the future to have this automatically handled, based on a chosen *approach* to formatting :smile: \r\n\r\nE.g. T5, GPT-3, decaNLP, original raw formatting, or a contributed way of formatting in text-to-text. ",
"This is an interesting discussion indeed and it would be nice to make multi-task easier.\r\n\r\nProbably the best would be to have a new type of dataset especially designed for that in order to easily combine and sample from the multiple datasets.\r\n\r\nThis way we could probably handle the combination of datasets with differing schemas as well (unlike T5).",
"@thomwolf Are you suggesting making a wrapper class which can take existing datasets as arguments and do all the required sampling/combining, to present the same interface as a normal dataset?\r\n\r\nThat doesn't seem too complicated to implement.\r\n",
"I guess we're looking at the end user writing something like:\r\n``` python\r\nds = nlp.load_dataset('multitask-t5',datasets=[\"squad\",\"cnn_dm\",...], k=1000, t=2.0)\r\n```\r\nUsing the t5 method of combining here (or this could be a function passed in as an arg) \r\n\r\nPassing kwargs to each 'sub-dataset' might become tricky.",
"From thinking upon @thomwolf 's suggestion, I've started experimenting:\r\n```python\r\nclass MultitaskDataset(DatasetBuilder):\r\n def __init__(self, *args, **kwargs):\r\n super(MultitaskDataset, self).__init__(*args, **kwargs)\r\n self._datasets = kwargs.get(\"datasets\")\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=nlp.Features({\r\n \"source\": nlp.Value(\"string\"),\r\n \"target\": nlp.Sequence(nlp.Value(\"string\"))\r\n })\r\n )\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self._datasets'''\r\n min_set = None\r\n for dataset in self._datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n....\r\n\r\n# Maybe this?:\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\nmultitask_dataset = nlp.load_dataset(\r\n 'multitask_dataset',\r\n datasets=[squad,cnn_dailymail], \r\n k=1000, \r\n t=2.0\r\n)\r\n\r\n```\r\n\r\nDoes anyone know what methods of `MultitaskDataset` I would need to implement? Maybe `as_dataset` and `download_and_prepare`? Most of these should be just calling the methods of the sub-datasets. \r\n\r\nI'm assuming DatasetBuilder is better than the more specific `GeneratorBasedBuilder`, `BeamBasedBuilder`, etc....\r\n\r\nOne of the other problems is that the dataset size is unknown till you construct it (as you can pick the sub-datasets). Am hoping not to need to make changes to `nlp.load_dataset` just for this class.\r\n\r\nI'd appreciate it if anyone more familiar with nlp's internal workings could tell me if I'm on the right track!",
"I think I would probably go for a `MultiDataset` wrapper around a list of `Dataset`.\r\n\r\nI'm not sure we need to give it `k` and `t` parameters at creation, it can maybe be something along the lines of:\r\n```python\r\nsquad = nlp.load_dataset(\"squad\")\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\",\"3.0.0\")\r\n\r\nmultitask_dataset = nlp.MultiDataset(squad, cnn_dm)\r\n\r\nbatch = multitask_dataset.sample(10, temperature=2.0, k=1000)\r\n```\r\n\r\nThe first proof-of-concept for multi-task datasets could definitely require that the provided datasets have the same name/type for columns (if needed you easily rename/cast a column prior to instantiating the `MultiDataset`).\r\n\r\nIt's good to think about it for some time though and don't overfit too much on the T5 examples (in particular for the ways/kwargs for sampling among datasets).",
"The problem with changing `k` and `t` per sampling is that you'd have to somehow remember which examples you'd already returned while re-weighting the remaining examples based on the new `k` and `t`values. It seems possible but complicated (I can't really see a reason why you'd want to change the weighting of datasets after you constructed the multidataset).\r\n\r\nWouldn't it be convenient if it implemented the dataset interface? Then if someone has code using a single nlp dataset, they can replace it with a multitask combination of more datasets without having to change other code. We would at least need to be able to pass it into a `DataLoader`.\r\n\r\n",
"A very janky (but working) implementation of `multitask_dataset.sample()` could be something like this:\r\n```python\r\nimport nlp\r\nimport torch\r\n\r\nclass MultiDataset():\r\n def __init__(self, *args, temperature=2.0, k=1000, maximum=None, scale=1):\r\n self.datasets = args\r\n self._dataloaders = {}\r\n for split in self._get_common_splits():\r\n split_datasets = [ds[split] for ds in self.datasets]\r\n mixing_rates = self._calc_mixing_rates(split_datasets,temperature, k, maximum, scale)\r\n weights = []\r\n for i in range(len(self.datasets)):\r\n weights += [mixing_rates[i]]*len(self.datasets[i][split])\r\n self._dataloaders[split] = torch.utils.data.DataLoader(torch.utils.data.ConcatDataset(split_datasets),\r\n sampler=torch.utils.data.sampler.WeightedRandomSampler(\r\n num_samples=len(weights),\r\n weights = weights,\r\n replacement=True),\r\n shuffle=False)\r\n\r\n def _get_common_splits(self):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in self.datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n\r\n def _calc_mixing_rates(self,datasets, temperature=2.0, k=1000, maximum=None, scale=1):\r\n '''Work out the weighting of each dataset based on t and k'''\r\n mixing_rates = []\r\n for dataset in datasets:\r\n rate = len(dataset)\r\n rate *= scale\r\n if maximum:\r\n rate = min(rate, maximum)\r\n if temperature != 1.0:\r\n rate = rate ** (1.0/temperature)\r\n mixing_rates.append(rate)\r\n return mixing_rates\r\n\r\n def sample(self,n,split):\r\n batch = []\r\n for example in self._dataloaders[split]:\r\n batch.append(example)\r\n n -= 1\r\n if n == 0:\r\n return batch\r\n\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\nmultitask_dataset = MultiDataset(squad, cnn_dm)\r\nbatch = multitask_dataset.sample(100,\"train\")\r\n```\r\n\r\nThere's definitely a more sensible way than embedding `DataLoader`s inside. ",
"There is an interesting related investigation by @zphang here https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb",
"Good spot! Here are my thoughts:\r\n\r\n- Aside: Adding `MultitaskModel` to transformers might be a thing to raise - even though having task-specific heads has become unfashionable in recent times in favour of text-to-text type models.\r\n- Adding the task name as an extra field also seems useful for these kind of models which have task-specific heads\r\n- There is some validation of our approach that the user should be expected to `map` datasets into a common form.\r\n- The size-proportional sampling (also called \"Examples-proportional mixing\") used here doesn't perform too badly in the T5 paper (it's comparable to temperature-scaled mixing in many cases but less flexible. This is only reasonable with a `K` maximum size parameter to prevent very large datasets dominating). This might be good for a first prototype using:\r\n ```python\r\n def __iter__(self):\r\n \"\"\"\r\n For each batch, sample a task, and yield a batch from the respective\r\n task Dataloader.\r\n\r\n We use size-proportional sampling, but you could easily modify this\r\n to sample from some-other distribution.\r\n \"\"\"\r\n task_choice_list = []\r\n for i, task_name in enumerate(self.task_name_list):\r\n task_choice_list += [i] * self.num_batches_dict[task_name]\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n dataloader_iter_dict = {\r\n task_name: iter(dataloader) \r\n for task_name, dataloader in self.dataloader_dict.items()\r\n }\r\n for task_choice in task_choice_list:\r\n task_name = self.task_name_list[task_choice]\r\n yield next(dataloader_iter_dict[task_name]) \r\n ```\r\n We'd just need to pull samples from the raw datasets and not from `DataLoader`s for each task. We can assume the user has done `dataset.shuffle()` if they want to.\r\n\r\n Other sampling methods can later be implemented by changing how the `task_choice_list` is generated. This should allow more flexibility and not tie us to specific methods for sampling among datasets.\r\n",
"Another thought: Multitasking over benchmarks (represented as Meta-datasets in nlp) is probably a common use case. Would be nice to pass an entire benchmark to our `MultiDataset` wrapper rather than having to pass individual components.",
"Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n\r\n- I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n- I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n- I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n- I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n- This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n\r\n```python\r\nimport nlp\r\nimport numpy as np\r\n\r\nclass MultiDataset:\r\n def __init__(self,tasks):\r\n self.tasks = tasks\r\n\r\n # Create random order of tasks\r\n # Using size-proportional sampling\r\n task_choice_list = []\r\n for i, task in enumerate(self.tasks):\r\n task_choice_list += [i] * len(task)\r\n task_choice_list = np.array(task_choice_list)\r\n np.random.shuffle(task_choice_list)\r\n\r\n # Add index into each dataset\r\n # - We don't want to shuffle within each task\r\n counters = {}\r\n self.task_choice_list = []\r\n for i in range(len(task_choice_list)):\r\n idx = counters.get(task_choice_list[i],0)\r\n self.task_choice_list.append((task_choice_list[i],idx))\r\n counters[task_choice_list[i]] = idx + 1\r\n\r\n\r\n def __len__(self):\r\n return np.sum([len(t) for t in self.tasks])\r\n\r\n def __repr__(self):\r\n task_str = \", \".join([str(t) for t in self.tasks])\r\n return f\"MultiDataset(tasks: {task_str})\"\r\n\r\n def __getitem__(self,key):\r\n if isinstance(key, int):\r\n task_idx, example_idx = self.task_choice_list[key]\r\n task = self.tasks[task_idx]\r\n example = task[example_idx]\r\n example[\"task_name\"] = task.info.builder_name\r\n return example\r\n elif isinstance(key, slice):\r\n raise NotImplementedError()\r\n\r\n def __iter__(self):\r\n for i in range(len(self)):\r\n yield self[i]\r\n\r\n\r\ndef load_multitask(*datasets):\r\n '''Create multitask datasets per split'''\r\n\r\n def _get_common_splits(datasets):\r\n '''Finds the common splits present in all self.datasets'''\r\n min_set = None\r\n for dataset in datasets:\r\n if min_set != None:\r\n min_set.intersection(set(dataset.keys()))\r\n else:\r\n min_set = set(dataset.keys())\r\n return min_set\r\n\r\n common_splits = _get_common_splits(datasets)\r\n out = {}\r\n for split in common_splits:\r\n out[split] = MultiDataset([d[split] for d in datasets])\r\n return out\r\n\r\n\r\n##########################################\r\n# Dataset Flattening\r\n\r\ndef flatten(dataset,flatten_fn):\r\n for k in dataset.keys():\r\n if isinstance(dataset[k],nlp.Dataset):\r\n dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n\r\n# Squad\r\ndef flatten_squad(example):\r\n return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n \"target\":example[\"answers\"][\"text\"]}\r\nsquad = nlp.load_dataset(\"squad\")\r\nflatten(squad,flatten_squad)\r\n\r\n# CNN_DM\r\ndef 
flatten_cnn_dm(example):\r\n return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\ncnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\nflatten(cnn_dm,flatten_cnn_dm)\r\n\r\n#############################################\r\n\r\nmtds = load_multitask(squad,cnn_dm)\r\n\r\nfor example in mtds[\"train\"]:\r\n print(example[\"task_name\"],example[\"target\"])\r\n```\r\nLet me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.",
"Hey! Happy to jump into the discussion here. I'm still getting familiar with bits of this code, but the reasons I sampled over data loaders rather than datasets is 1) ensuring that each sampled batch corresponds to only 1 task (in case of different inputs formats/downstream models) and 2) potentially having different batch sizes per task (e.g. some tasks have very long/short inputs). How are you currently dealing with these in your PR?",
"The short answer is - I'm not! Everything is currently on a per-example basis. It would be fairly simple to add a `batch_size` argument which would ensure that every `batch_size` examples come from the same task. That should suit most use-cases (unless you wanted to ensure batches all came from the same task and apply something like `SortishSampler` on each task first)\r\n\r\nYour notebook was really inspiring by the way - thanks!",
"@zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.",
"mt-dnn's [batcher.py](https://github.com/namisan/mt-dnn/blob/master/mt_dnn/batcher.py) might be worth looking at.",
"> @zphang is having different batch sizes per task actually helpful? Would be interesting to know as it's not something I've come across as a technique used by any MTL papers.\r\n\r\nI think having different batch sizes per task is particularly helpful in some scenarios where each task has different amount of data. For example, the problem I'm currently facing is one task has tens of thousands of samples while one task has a couple hundreds. I think in this case different batch size could help. But if using the same batch size is a lot simpler to implement, I guess it makes sense to go with that.",
"I think that instead of proportional to size sampling you should specify weights or probabilities for drawing a batch from each dataset. We should also ensure that the smaller datasets are repeated so that the encoder layer doesn't overtrain on the largest dataset.",
"Are there any references for people doing different batch sizes per task in the literature? I've only seen constant batch sizes with differing numbers of batches for each task which seems sufficient to prevent the impact of large datasets (Read 3.5.3 of the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) for example).\r\n\r\n",
"Hi,\r\nregarding building T5 dataset , I think we can use datasets https://github.com/huggingface/datasets and then need something similar to tf.data.experimental.sample_from_datasets, do you know if similar functionality exist in pytorch? Which can sample multiple datasets with the given rates. thanks. ",
"Is this feature part of a `datasets` release yet? ",
"> Here's a fully working implementation based on the `__iter__` function of @zphang.\r\n> \r\n> * I've generated the task choice list in the constructor as it allows us to index into the MultiDataset just like a normal dataset. I'm changing `task_choice_list` into a list of `(dataset_idx, example_idx)` so each entry references a unique dataset example. The shuffling has to be done before this as we don't want to shuffle within each task (we assume this is done by the user if this is what they intend).\r\n> * I'm slightly concerned this list could become very large if many large datasets were used. Can't see a way round it at the moment though.\r\n> * I've used `task.info.builder_name` as the dataset name. Not sure if this is correct.\r\n> * I'd love to add some of the other `Dataset` methods (map, slicing by column, etc...). Would be great to implement the whole interface so a single dataset can be simply replaced by this.\r\n> * This does everything on the individual example-level. If some application required batches all from a single task in turn we can't really do that.\r\n> \r\n> ```python\r\n> import nlp\r\n> import numpy as np\r\n> \r\n> class MultiDataset:\r\n> def __init__(self,tasks):\r\n> self.tasks = tasks\r\n> \r\n> # Create random order of tasks\r\n> # Using size-proportional sampling\r\n> task_choice_list = []\r\n> for i, task in enumerate(self.tasks):\r\n> task_choice_list += [i] * len(task)\r\n> task_choice_list = np.array(task_choice_list)\r\n> np.random.shuffle(task_choice_list)\r\n> \r\n> # Add index into each dataset\r\n> # - We don't want to shuffle within each task\r\n> counters = {}\r\n> self.task_choice_list = []\r\n> for i in range(len(task_choice_list)):\r\n> idx = counters.get(task_choice_list[i],0)\r\n> self.task_choice_list.append((task_choice_list[i],idx))\r\n> counters[task_choice_list[i]] = idx + 1\r\n> \r\n> \r\n> def __len__(self):\r\n> return np.sum([len(t) for t in self.tasks])\r\n> \r\n> def __repr__(self):\r\n> task_str = \", \".join([str(t) for t in self.tasks])\r\n> return f\"MultiDataset(tasks: {task_str})\"\r\n> \r\n> def __getitem__(self,key):\r\n> if isinstance(key, int):\r\n> task_idx, example_idx = self.task_choice_list[key]\r\n> task = self.tasks[task_idx]\r\n> example = task[example_idx]\r\n> example[\"task_name\"] = task.info.builder_name\r\n> return example\r\n> elif isinstance(key, slice):\r\n> raise NotImplementedError()\r\n> \r\n> def __iter__(self):\r\n> for i in range(len(self)):\r\n> yield self[i]\r\n> \r\n> \r\n> def load_multitask(*datasets):\r\n> '''Create multitask datasets per split'''\r\n> \r\n> def _get_common_splits(datasets):\r\n> '''Finds the common splits present in all self.datasets'''\r\n> min_set = None\r\n> for dataset in datasets:\r\n> if min_set != None:\r\n> min_set.intersection(set(dataset.keys()))\r\n> else:\r\n> min_set = set(dataset.keys())\r\n> return min_set\r\n> \r\n> common_splits = _get_common_splits(datasets)\r\n> out = {}\r\n> for split in common_splits:\r\n> out[split] = MultiDataset([d[split] for d in datasets])\r\n> return out\r\n> \r\n> \r\n> ##########################################\r\n> # Dataset Flattening\r\n> \r\n> def flatten(dataset,flatten_fn):\r\n> for k in dataset.keys():\r\n> if isinstance(dataset[k],nlp.Dataset):\r\n> dataset[k] = dataset[k].map(flatten_fn,remove_columns=dataset[k].column_names)\r\n> \r\n> # Squad\r\n> def flatten_squad(example):\r\n> return {\"source\": \"squad context: \" + example['context'] + \" question: \" + example['question'],\r\n> 
\"target\":example[\"answers\"][\"text\"]}\r\n> squad = nlp.load_dataset(\"squad\")\r\n> flatten(squad,flatten_squad)\r\n> \r\n> # CNN_DM\r\n> def flatten_cnn_dm(example):\r\n> return {\"source\": \"cnn_dm: \" + example['article'],\"target\":[example[\"highlights\"]]}\r\n> cnn_dm = nlp.load_dataset(\"cnn_dailymail\", \"3.0.0\")\r\n> flatten(cnn_dm,flatten_cnn_dm)\r\n> \r\n> #############################################\r\n> \r\n> mtds = load_multitask(squad,cnn_dm)\r\n> \r\n> for example in mtds[\"train\"]:\r\n> print(example[\"task_name\"],example[\"target\"])\r\n> ```\r\n> \r\n> Let me know if you have any thoughts. I've started using this in some of my projects and it seems to work. If people are happy with the general approach for a first version, I can make a pull request.\r\n\r\nNot sure if this is what I'm looking for, but I implemented a version of Examples-Proportional mixing supporting only the basic feature [here](https://stackoverflow.com/a/74070116/10732321), seems to work in my project. ",
"You can use `interleave_datasets` to mix several datasets together. By default it alternates between all the datasets, but you can also provide sampling probabilities if you want to oversample from one of the datasets\r\n\r\n```python\r\nfrom datasets import load_dataset, interleave_datasets\r\n\r\nsquad = load_dataset(\"squad\", split=\"train\")\r\ncnn_dm = load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\nds = interleave_datasets([squad, cnn_dm])\r\n\r\nprint(ds[0])\r\n# {'id': '5733be284776f41900661182',\r\n# 'title': 'University_of_Notre_Dame',\r\n# 'context': 'Architecturally, the school has a Catholic character...',\r\n# 'question': 'To whom did the Virgin Mary allegedly appear in 1858 in Lourdes France?',\r\n# 'answers': {'text': ['Saint Bernadette Soubirous'], 'answer_start': [515]},\r\n# 'article': None,\r\n# 'highlights': None}\r\nprint(ds[1])\r\n# {'id': '42c027e4ff9730fbb3de84c1af0d2c506e41c3e4',\r\n# 'title': None,\r\n# 'context': None,\r\n# 'question': None,\r\n# 'answers': None,\r\n# 'article': 'LONDON, England (Reuters) -- Harry Potter star Daniel Radcliffe...',\r\n# 'highlights': \"Harry Potter star Daniel Radcliffe...\"}\r\n```\r\n\r\nsee docs at https://huggingface.co/docs/datasets/v2.6.1/en/package_reference/main_classes#datasets.interleave_datasets",
"I also have this implementation of multi-task sampler here which I used it to tune T5: https://github.com/rabeehk/hyperformer/blob/main/hyperformer/data/multitask_sampler.py "
] |
https://api.github.com/repos/huggingface/datasets/issues/1579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1579/comments | https://api.github.com/repos/huggingface/datasets/issues/1579/events | https://github.com/huggingface/datasets/pull/1579 | 767,808,465 | MDExOlB1bGxSZXF1ZXN0NTQwMzk5OTY5 | 1,579 | Adding CLIMATE-FEVER dataset | [] | closed | false | null | 5 | 2020-12-15T16:49:22Z | 2020-12-22T13:43:16Z | 2020-12-22T13:43:15Z | null | This PR request the addition of the CLIMATE-FEVER dataset:
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present.
More information can be found at:
- Homepage: <http://climatefever.ai>
- Paper: <https://arxiv.org/abs/2012.00614>
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1579/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1579",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1579"
} | true | [
"I `git rebase`ed my branch to `upstream/master` as suggested in point 7 of <https://huggingface.co/docs/datasets/share_dataset.html> and subsequently used `git pull` to be able to push to my remote branch. However, I think this messed up the history.\r\n\r\nPlease let me know if I should create a clean new PR with my changes.\r\n\r\nUpdate: I also fixed the dataset name in the Dataset Card.",
"Dear @SBrandeis , @lhoestq . I am not sure how to fix the PR with respect to the additional files that are currently included in the commits. Could you provide me with an example? Otherwise I would be happy to close/re-open another PR. Please let me know if anything is missing for the review.",
"Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\nI believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.",
"> Hi @tdiggelm, thanks for the contribution! This dataset is really awesome.\r\n> I believe creating a new branch from master and opening a new PR with your changes is the simplest option since no review has been done yet. Feel free to ping us when it's done.\r\n\r\nThank you very much for your quick reply! Will do ASAP and ping you when done.",
"closing in favor of #1623"
] |
https://api.github.com/repos/huggingface/datasets/issues/2543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2543/comments | https://api.github.com/repos/huggingface/datasets/issues/2543/events | https://github.com/huggingface/datasets/issues/2543 | 928,571,915 | MDU6SXNzdWU5Mjg1NzE5MTU= | 2,543 | switching some low-level log.info's to log.debug? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2021-06-23T19:26:55Z | 2021-06-25T13:40:19Z | 2021-06-25T13:40:19Z | null | In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components.
The trouble is that now we get a ton of these:
```
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.
06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock
```
May I suggest that these be changed to `log.debug`, as they're not informative to the user.
More examples: these are not informative for the user - just too much low-level detail:
```
06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports.
06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
```
While these are:
```
06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```
I also realize that the `transformers` examples don't have to use `info` for `datasets` and could leave the default `warning` level so the logging stays less noisy.
But I think the log levels are currently slightly misused and skewed by 1 level. Many `warning`s would be better as `info`s and most `info`s as `debug`.
e.g.:
```
06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a)
```
Why is this a warning? It is informing me that the cache is used; there is nothing to be worried about. I'd have it as `info`.
Warnings are typically something bordering on an error, or the first thing to check when things don't work as expected.
Infrequent info is there to inform about the different stages or important events.
Everything else is debug.
At least the way I understand things.
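For reference, a minimal sketch of how a script could put both libraries on the same level (this mirrors the pattern used in the transformers examples, but the exact integration there may differ):

```python
import logging

import datasets
import transformers

log_level = logging.INFO  # or logging.WARNING / logging.DEBUG
logging.basicConfig(level=log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
```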
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2543/timeline | null | completed | null | null | false | [
"Hi @stas00, thanks for pointing out this issue with logging.\r\n\r\nI agree that `datasets` can sometimes be too verbose... I can create a PR and we could discuss there the choice of the log levels for different parts of the code."
] |
https://api.github.com/repos/huggingface/datasets/issues/3306 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3306/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3306/comments | https://api.github.com/repos/huggingface/datasets/issues/3306/events | https://github.com/huggingface/datasets/issues/3306 | 1,059,185,860 | I_kwDODunzps4_IeTE | 3,306 | nested sequence feature won't encode example if the first item of the outside sequence is an empty list | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-11-20T16:57:54Z | 2021-12-08T13:02:15Z | 2021-12-08T13:02:15Z | null | ## Describe the bug
As the title says, a nested sequence feature won't encode the example if the first item of the outer sequence is an empty list.
## Steps to reproduce the bug
```python
from datasets import Features, Sequence, ClassLabel
features = Features({
'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))),
})
print(features.encode_batch({
'x': [
[['a'], ['b']],
[[], ['b']],
]
}))
```
## Expected results
print `{'x': [[[0], [1]], [[], [1]]]}`
## Actual results
print `{'x': [[[0], [1]], [[], ['b']]]}`
## Environment info
- `datasets` version: 1.15.1
- Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.0
## Additional information
I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
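For illustration, a hypothetical reduction of the suspected logic (this is not the actual `features.py` code): if encoding is keyed off the first sub-list, an empty first sub-list leaves the later ClassLabel strings un-encoded.

```python
label2id = {"a": 0, "b": 1}

def encode_column(column):
    # hypothetical: decide whether to encode by inspecting only the first item
    if column and column[0]:
        return [[label2id[v] for v in sub] for sub in column]
    return column  # empty first sub-list -> returned as-is, so 'b' stays a string

print(encode_column([["a"], ["b"]]))  # [[0], [1]]
print(encode_column([[], ["b"]]))     # [[], ['b']]  <- the reported behaviour
```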
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3306/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3306/timeline | null | completed | null | null | false | [
"knock knock",
"Hi, thanks for reporting! I've linked a PR that should fix the issue.",
"I've checked the PR and it looks great, thanks a lot!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2146/comments | https://api.github.com/repos/huggingface/datasets/issues/2146/events | https://github.com/huggingface/datasets/issues/2146 | 844,673,244 | MDU6SXNzdWU4NDQ2NzMyNDQ= | 2,146 | Dataset file size on disk is very large with 3D Array | [] | open | false | null | 6 | 2021-03-30T14:46:09Z | 2021-04-16T13:07:02Z | null | null | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
```
{
"description": "",
"citation": "",
"homepage": "",
"license": "",
"features": {
"image": {
"shape": [224, 224, 3],
"dtype": "uint8",
"id": null,
"_type": "Array3D",
}
},
"post_processed": null,
"supervised_keys": null,
"builder_name": "shot_type_image_dataset",
"config_name": "default",
"version": {
"version_str": "0.0.0",
"description": null,
"major": 0,
"minor": 0,
"patch": 0,
},
"splits": {
"train": {
"name": "train",
"num_bytes": 520803408,
"num_examples": 1479,
"dataset_name": "shot_type_image_dataset",
}
},
"download_checksums": {
"": {
"num_bytes": 16940447118,
"checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03",
}
},
"download_size": 16940447118,
"post_processing_size": null,
"dataset_size": 520803408,
"size_in_bytes": 17461250526,
}
```
I have created the same dataset with tensorflow_datasets and it takes only 125MB on disk.
I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization whereas tf uses TFRecords.
This might be a problem for large datasets.
Thanks for your help.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2146/timeline | null | null | null | null | false | [
"Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these encodings are made for compression, the resulting tfrecord is smaller that the arrow file.\r\n\r\nWe are working on adding a similar feature in `datasets`: the ability to store the encoded data instead of the raw integers for images, but also for audio data. This way, arrow files will have similar sizes as tfrecords for images.",
"Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned.\r\nHowever, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes. \r\n\r\nAnyway I look forward to the Image feature type in `datasets`. ",
"@lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the two last dimensions ( a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! Which is around of what I would expect.\r\nI found similar behavior in existing `datasets` collection, when comparing black and white vs color image, for example MNIST vs CIFAR. ",
"Interesting !\r\nThis may be because of the offsets that are stored with the array data.\r\n\r\nCurrently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them to make the file lighter.\r\n\r\nIdeally in your case the floats data should be 220 MB for both Array2D and Array3D",
"Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue. ",
"Pyarrow has two types of lists: variable length lists and fixed size lists.\r\nCurrently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets.\r\nIn the `datasets` code this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L346-L352\r\n\r\nTo use a fixed length list, one should use the `list_size` argument of `pyarrow.list_()`.\r\nI believe this would work directly modulo some changes in the numpy conversion here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L381-L395"
] |
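A rough sketch of the storage difference described above (this is not the `datasets` implementation, just the two pyarrow layouts for the innermost 3-value pixel lists of a single 224x224x3 uint8 image):

```python
import numpy as np
import pyarrow as pa

n_pixels = 224 * 224
values = pa.array(np.zeros(n_pixels * 3, dtype=np.uint8))

# variable-length lists: 3 bytes of data plus a 4-byte offset per pixel
offsets = pa.array(np.arange(0, (n_pixels + 1) * 3, 3, dtype=np.int32))
variable = pa.ListArray.from_arrays(offsets, values)

# fixed-size lists: only the data buffer
fixed = pa.FixedSizeListArray.from_arrays(values, 3)

print(variable.nbytes, fixed.nbytes)  # roughly 351 KB vs 150 KB per image
```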
https://api.github.com/repos/huggingface/datasets/issues/674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/674/comments | https://api.github.com/repos/huggingface/datasets/issues/674/events | https://github.com/huggingface/datasets/issues/674 | 709,661,006 | MDU6SXNzdWU3MDk2NjEwMDY= | 674 | load_dataset() won't download in Windows | [] | closed | false | null | 3 | 2020-09-27T03:56:25Z | 2020-10-05T08:28:18Z | 2020-10-05T08:28:18Z | null | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDE's and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure python and all IDE's are exceptions to the firewall and all the requisite permissions are enabled.
Additionally, I checked to see if other packages could download content such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment.
Could this be a bug, or is there something I'm doing wrong or not thinking of?
Thanks. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/674/timeline | null | completed | null | null | false | [
"I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```",
"This was fixed in #644 \r\nI'll do a new release soon :)\r\n\r\nIn the meantime you can run it by installing from source",
"Closing since version 1.1.0 got released with Windows support :) \r\nLet me know if it works for you now"
] |
https://api.github.com/repos/huggingface/datasets/issues/3965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3965/comments | https://api.github.com/repos/huggingface/datasets/issues/3965/events | https://github.com/huggingface/datasets/issues/3965 | 1,173,708,739 | I_kwDODunzps5F9V_D | 3,965 | TypeError: Couldn't cast array of type for JSONLines dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-03-18T15:17:53Z | 2022-05-06T16:13:51Z | 2022-05-06T16:13:51Z | null | ## Describe the bug
One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below).
This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url
import pandas as pd
# returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl'
data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
# throws TypeError: Couldn't cast array of type
dset = load_dataset("json", data_files=data_files, split="test")
# no problem with pandas - note this take a while as the file is >2GB
df = pd.read_json(data_files, orient="records", lines=True)
df.head()
```
## Expected results
I can load any line-separated JSON file, similar to pandas.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset
builder_instance.download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split
writer.write_table(table)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table
pa_table = table_cast(pa_table, self._schema)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast
return cast_table_to_features(table, Features.from_arrow_schema(schema))
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature
return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in wrapper
return func(array, *args, **kwargs)
File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}")
TypeError: Couldn't cast array of type
struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]>
to
null
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3965/timeline | null | completed | null | null | false | [
"Hi @lewtun, thanks for reporting.\r\n\r\nIt seems that our library fails at inferring the dtype of the columns:\r\n- `milestone`\r\n- `performed_via_github_app` \r\n\r\n(and assigns them `null` dtype)."
] |
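Since pandas can already parse the file (as shown in the reproduction above), one possible workaround is sketched below. It is hedged, not a confirmed fix, and it simply discards the two columns whose dtype cannot be inferred:

```python
import pandas as pd
from datasets import Dataset
from huggingface_hub import hf_hub_url

data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")
df = pd.read_json(data_files, orient="records", lines=True)

# Drop the columns that were assigned a `null` dtype, then convert.
dset = Dataset.from_pandas(df.drop(columns=["milestone", "performed_via_github_app"]))
```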
https://api.github.com/repos/huggingface/datasets/issues/2858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2858/comments | https://api.github.com/repos/huggingface/datasets/issues/2858/events | https://github.com/huggingface/datasets/pull/2858 | 984,145,568 | MDExOlB1bGxSZXF1ZXN0NzIzNjEzNzQ0 | 2,858 | Fix s3fs version in CI | [] | closed | false | null | 0 | 2021-08-31T18:05:43Z | 2021-09-06T13:33:35Z | 2021-08-31T21:29:51Z | null | The latest s3fs version has new constrains on aiobotocore, and therefore on boto3 and botocore
This PR changes the constrains to avoid the new conflicts
In particular it pins the version of s3fs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2858/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2858/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2858",
"merged_at": "2021-08-31T21:29:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2858"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5569/comments | https://api.github.com/repos/huggingface/datasets/issues/5569/events | https://github.com/huggingface/datasets/pull/5569 | 1,597,132,383 | PR_kwDODunzps5KnwHD | 5,569 | pass the dataset features to the IterableDataset.from_generator function | [] | closed | false | null | 3 | 2023-02-23T16:06:04Z | 2023-02-24T14:06:37Z | 2023-02-23T18:15:16Z | null | [5558](https://github.com/huggingface/datasets/issues/5568) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5569/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5569",
"merged_at": "2023-02-23T18:15:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5569"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008753 / 0.011353 (-0.002600) | 0.004877 / 0.011008 (-0.006131) | 0.098320 / 0.038508 (0.059812) | 0.034123 / 0.023109 (0.011014) | 0.289539 / 0.275898 (0.013641) | 0.323584 / 0.323480 (0.000104) | 0.007455 / 0.007986 (-0.000531) | 0.004763 / 0.004328 (0.000434) | 0.074350 / 0.004250 (0.070100) | 0.039018 / 0.037052 (0.001966) | 0.294319 / 0.258489 (0.035830) | 0.348686 / 0.293841 (0.054845) | 0.037814 / 0.128546 (-0.090732) | 0.011808 / 0.075646 (-0.063838) | 0.333808 / 0.419271 (-0.085464) | 0.047758 / 0.043533 (0.004225) | 0.298533 / 0.255139 (0.043394) | 0.320790 / 0.283200 (0.037590) | 0.095909 / 0.141683 (-0.045774) | 1.434422 / 1.452155 (-0.017732) | 1.509703 / 1.492716 (0.016987) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201728 / 0.018006 (0.183722) | 0.432243 / 0.000490 (0.431753) | 0.002760 / 0.000200 (0.002560) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026090 / 0.037411 (-0.011321) | 0.105914 / 0.014526 (0.091388) | 0.115869 / 0.176557 (-0.060688) | 0.178291 / 0.737135 (-0.558844) | 0.121435 / 0.296338 (-0.174904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.402304 / 0.215209 (0.187095) | 3.995183 / 2.077655 (1.917529) | 1.794548 / 1.504120 (0.290428) | 1.603034 / 1.541195 (0.061839) | 1.643836 / 1.468490 
(0.175346) | 0.694934 / 4.584777 (-3.889843) | 3.695128 / 3.745712 (-0.050584) | 2.018582 / 5.269862 (-3.251279) | 1.294315 / 4.565676 (-3.271362) | 0.085346 / 0.424275 (-0.338929) | 0.012201 / 0.007607 (0.004594) | 0.510057 / 0.226044 (0.284012) | 5.123404 / 2.268929 (2.854476) | 2.319089 / 55.444624 (-53.125535) | 1.930935 / 6.876477 (-4.945542) | 1.939700 / 2.142072 (-0.202372) | 0.848282 / 4.805227 (-3.956945) | 0.165561 / 6.500664 (-6.335103) | 0.062028 / 0.075469 (-0.013441) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220576 / 1.841788 (-0.621212) | 14.413853 / 8.074308 (6.339544) | 14.027156 / 10.191392 (3.835764) | 0.170109 / 0.680424 (-0.510315) | 0.029412 / 0.534201 (-0.504789) | 0.443898 / 0.579283 (-0.135386) | 0.433059 / 0.434364 (-0.001305) | 0.533465 / 0.540337 (-0.006872) | 0.626562 / 1.386936 (-0.760374) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007148 / 0.011353 (-0.004205) | 0.005019 / 0.011008 (-0.005989) | 0.073132 / 0.038508 (0.034624) | 0.032763 / 0.023109 (0.009654) | 0.329309 / 0.275898 (0.053411) | 0.361658 / 0.323480 (0.038178) | 0.005683 / 0.007986 (-0.002302) | 0.003793 / 0.004328 (-0.000536) | 0.071858 / 0.004250 (0.067608) | 0.045160 / 0.037052 (0.008107) | 0.335852 / 0.258489 (0.077363) | 0.384274 / 0.293841 (0.090433) | 0.036647 / 0.128546 (-0.091899) | 0.012217 / 0.075646 (-0.063430) | 0.086265 / 0.419271 (-0.333007) | 0.049223 / 0.043533 (0.005690) | 0.331460 / 0.255139 (0.076321) | 0.353175 / 0.283200 (0.069975) | 0.102214 / 0.141683 (-0.039469) | 1.491451 / 1.452155 (0.039296) | 1.553702 / 1.492716 (0.060985) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222972 / 0.018006 (0.204966) | 0.432862 / 0.000490 (0.432372) | 0.000421 / 0.000200 (0.000221) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028401 / 0.037411 (-0.009010) | 0.109331 / 0.014526 (0.094805) | 0.119246 / 0.176557 (-0.057311) | 0.187997 / 0.737135 (-0.549138) | 0.124212 / 0.296338 (-0.172127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427240 / 0.215209 (0.212031) | 4.271619 / 2.077655 (2.193964) | 2.104948 / 1.504120 (0.600828) | 1.910624 / 1.541195 (0.369430) | 1.943812 / 1.468490 (0.475322) | 0.711466 / 4.584777 (-3.873311) | 3.778987 / 3.745712 (0.033275) | 2.976258 / 5.269862 (-2.293604) | 1.807591 / 4.565676 (-2.758086) | 0.088286 / 0.424275 (-0.335989) | 0.012461 / 0.007607 (0.004854) | 0.527554 / 0.226044 (0.301509) | 5.279461 / 2.268929 (3.010532) | 2.517911 / 55.444624 (-52.926713) | 2.176557 / 6.876477 (-4.699920) | 2.205322 / 2.142072 (0.063249) | 0.855012 / 4.805227 (-3.950215) | 0.170007 / 6.500664 (-6.330658) | 0.065273 / 0.075469 (-0.010196) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282785 / 1.841788 (-0.559003) | 14.819500 / 8.074308 (6.745192) | 13.282211 / 10.191392 (3.090819) | 0.161804 / 0.680424 (-0.518620) | 0.017615 / 0.534201 (-0.516586) | 0.420159 / 0.579283 (-0.159124) | 0.441304 / 0.434364 (0.006940) | 0.531704 / 0.540337 (-0.008634) | 0.627477 / 1.386936 (-0.759459) |\n\n</details>\n</details>\n\n\n",
"Hmm I think we need to add more tests. Not sure what would happen with :\r\n- decodable features that may end up decoded twice \r\n- formatted datasets \r\n\r\nI'd be in favor of reverting this until we checked those"
] |
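A short usage sketch of what this PR enables — passing `features` through `IterableDataset.from_generator`; the column names and values below are only illustrative:

```python
from datasets import Features, IterableDataset, Value

features = Features({"text": Value("string"), "label": Value("int64")})

def gen():
    yield {"text": "hello", "label": 0}
    yield {"text": "world", "label": 1}

ds = IterableDataset.from_generator(gen, features=features)
print(ds.features)  # the declared features instead of None
```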
https://api.github.com/repos/huggingface/datasets/issues/2992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2992/comments | https://api.github.com/repos/huggingface/datasets/issues/2992/events | https://github.com/huggingface/datasets/pull/2992 | 1,012,325,594 | PR_kwDODunzps4sg4ZP | 2,992 | Fix f1 metric with None average | [] | closed | false | null | 0 | 2021-09-30T15:31:57Z | 2021-10-01T14:17:39Z | 2021-10-01T14:17:38Z | null | Fix #2979. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2992/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2992.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2992",
"merged_at": "2021-10-01T14:17:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2992.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2992"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5520/comments | https://api.github.com/repos/huggingface/datasets/issues/5520/events | https://github.com/huggingface/datasets/issues/5520 | 1,578,417,074 | I_kwDODunzps5eFLuy | 5,520 | ClassLabel.cast_storage raises TypeError when called on an empty IntegerArray | [] | closed | false | null | 0 | 2023-02-09T18:46:52Z | 2023-02-12T11:17:18Z | 2023-02-12T11:17:18Z | null | ### Describe the bug
`ClassLabel.cast_storage` raises `TypeError` when called on an empty `IntegerArray`.
### Steps to reproduce the bug
Minimal steps:
```python
import pyarrow as pa
from datasets import ClassLabel
ClassLabel(names=['foo', 'bar']).cast_storage(pa.array([], pa.int64()))
```
In practice, this bug arises in situations like the one below:
```python
from datasets import ClassLabel, Dataset, Features, Sequence
dataset = Dataset.from_dict({'labels': [[], []]}, features=Features({'labels': Sequence(ClassLabel(names=['foo', 'bar']))}))
# this raises TypeError
dataset.map(batched=True, batch_size=1)
```
### Expected behavior
`ClassLabel.cast_storage` should return an empty Int64Array.
### Environment info
- `datasets` version: 2.9.1.dev0
- Platform: Linux-4.15.0-1032-aws-x86_64-with-glibc2.27
- Python version: 3.10.6
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5520/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/195/comments | https://api.github.com/repos/huggingface/datasets/issues/195/events | https://github.com/huggingface/datasets/pull/195 | 624,858,686 | MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy | 195 | [Dummy data command] add new case to command | [] | closed | false | null | 1 | 2020-05-26T12:50:47Z | 2020-05-26T14:38:28Z | 2020-05-26T14:38:27Z | null | Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/195/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/195.diff",
"html_url": "https://github.com/huggingface/datasets/pull/195",
"merged_at": "2020-05-26T14:38:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/195.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/195"
} | true | [
"@lhoestq - tiny change in the dummy data command, should be good to merge."
] |
https://api.github.com/repos/huggingface/datasets/issues/3297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3297/comments | https://api.github.com/repos/huggingface/datasets/issues/3297/events | https://github.com/huggingface/datasets/issues/3297 | 1,058,263,859 | I_kwDODunzps4_E9Mz | 3,297 | .map() cache is wrongfully reused - only happens when the mapping function is imported | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2021-11-19T08:18:36Z | 2023-01-30T12:40:17Z | null | null | ## Describe the bug
When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified.
The reason is that `dill`, which is used for creating the fingerprint, [pickles imported functions by reference](https://stackoverflow.com/a/67851411).
I guess it is not a widespread case, but it can still lead to unwanted results that go unnoticed.
## Steps to reproduce the bug
Create files `a.py` and `b.py`:
```python
# a.py
from datasets import load_dataset
def main():
squad = load_dataset("squad")
squad.map(mapping_func, batched=True)
def mapping_func(examples):
ID_LENGTH = 4
examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]]
return examples
if __name__ == "__main__":
main()
```
```python
# b.py
from datasets import load_dataset
from a import mapping_func
def main():
squad = load_dataset("squad")
squad.map(mapping_func, batched=True)
if __name__ == "__main__":
main()
```
Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...".
Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads the result of the previous mapping function from the cache.
## Expected results
Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...".
Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result.
## Workaround
Put the mapping function inside a dummy class as a static method:
```python
# a.py
class MappingFuncClass:
@staticmethod
def mapping_func(examples):
ID_LENGTH = 4
examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]]
return examples
```
```python
# b.py
from datasets import load_dataset
from a import MappingFuncClass
def main():
squad = load_dataset("squad")
squad.map(MappingFuncClass.mapping_func, batched=True)
if __name__ == "__main__":
main()
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3297/timeline | null | null | null | null | false | [
"Hi ! Thanks for reporting. Indeed this is a current limitation of the usage we have of `dill` in `datasets`. I'd suggest you use your workaround for now until we find a way to fix this. Maybe functions that are not coming from a module not installed with pip should be dumped completely, rather than only taking their locations into account",
"I agree. Sounds like a solution for it would be pretty dirty, even [cloudpickle](https://stackoverflow.com/a/16891169) doesn't help in this case.\r\nIn the meanwhile I think that adding a warning and the workaround somewhere in the documentation can be helpful.",
"For anyone interested, I see that with `dill==0.3.6` the workaround I suggested doesn't work anymore.\r\nI opened an issue about it: https://github.com/uqfoundation/dill/issues/572.\r\n\r\n "
] |
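Besides the static-method workaround shown in the issue body, the stale cache entry can also be sidestepped per call with `load_from_cache_file` (a real `map` parameter; this forces recomputation, so the caching speed-up is lost):

```python
# In b.py above: force recomputation instead of reusing a possibly stale cache entry.
squad = squad.map(mapping_func, batched=True, load_from_cache_file=False)
```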
https://api.github.com/repos/huggingface/datasets/issues/3391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3391/comments | https://api.github.com/repos/huggingface/datasets/issues/3391/events | https://github.com/huggingface/datasets/issues/3391 | 1,072,849,055 | I_kwDODunzps4_8mCf | 3,391 | method to select columns | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2021-12-07T02:44:19Z | 2021-12-07T02:45:27Z | 2021-12-07T02:45:27Z | null | **Is your feature request related to a problem? Please describe.**
* There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets` it results in an error.
**Describe the solution you'd like**
* A new method that can be used to create a new dataset with only a list of specified columns.
**Describe alternatives you've considered**
`.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)`
Or
`.select(self, indices: Iterable = None, columns: List[str] = None)`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3391/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3391/timeline | null | completed | null | null | false | [
"duplicate of #2655"
] |
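Until a dedicated selection method is available, a common workaround is to drop everything that is not wanted — a sketch with illustrative dataset and column names:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
keep = {"sentence1", "label"}  # columns to keep
dataset = dataset.remove_columns([c for c in dataset.column_names if c not in keep])
print(dataset.column_names)
```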
https://api.github.com/repos/huggingface/datasets/issues/5087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5087/comments | https://api.github.com/repos/huggingface/datasets/issues/5087/events | https://github.com/huggingface/datasets/pull/5087 | 1,400,487,967 | PR_kwDODunzps5AW-N9 | 5,087 | Fix filter with empty indices | [] | closed | false | null | 1 | 2022-10-07T01:07:00Z | 2022-10-07T18:43:03Z | 2022-10-07T18:40:26Z | null | Fix #5085 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5087/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5087.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5087",
"merged_at": "2022-10-07T18:40:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5087.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5087"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2906/comments | https://api.github.com/repos/huggingface/datasets/issues/2906/events | https://github.com/huggingface/datasets/pull/2906 | 995,962,905 | PR_kwDODunzps4rulH- | 2,906 | feat: 🎸 add a function to get a dataset config's split names | [] | closed | false | null | 1 | 2021-09-14T12:31:22Z | 2021-10-04T09:55:38Z | 2021-10-04T09:55:37Z | null | Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub
Questions:
- [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct?
-> no: reverted
- [x] Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)
-> yes: added | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2906/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2906.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2906",
"merged_at": "2021-10-04T09:55:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2906.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2906"
} | true | [
"> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is"
] |
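For reference, a brief usage sketch of the API this PR introduces (the dataset and config names are just examples):

```python
from datasets import get_dataset_config_names, get_dataset_split_names

configs = get_dataset_config_names("glue")        # e.g. ['cola', 'sst2', 'mrpc', ...]
splits = get_dataset_split_names("glue", "mrpc")  # e.g. ['train', 'validation', 'test']
```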
https://api.github.com/repos/huggingface/datasets/issues/5363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5363/comments | https://api.github.com/repos/huggingface/datasets/issues/5363/events | https://github.com/huggingface/datasets/issues/5363 | 1,498,171,317 | I_kwDODunzps5ZTEe1 | 5,363 | Dataset.from_generator() crashes on simple example | [] | closed | false | null | 0 | 2022-12-15T10:21:28Z | 2022-12-15T11:51:33Z | 2022-12-15T11:51:33Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5363/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2618/comments | https://api.github.com/repos/huggingface/datasets/issues/2618/events | https://github.com/huggingface/datasets/issues/2618 | 940,852,640 | MDU6SXNzdWU5NDA4NTI2NDA= | 2,618 | `filelock.py` Error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2021-07-09T15:12:49Z | 2021-07-12T06:20:30Z | null | null | ## Describe the bug
It seems that `filelock.py` raised an error.
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available
```
According to the error log, it is an `OSError`, but there is an `except` clause in the `_acquire` function.
```
def _acquire(self):
open_mode = os.O_WRONLY | os.O_CREAT | os.O_EXCL | os.O_TRUNC
try:
fd = os.open(self._lock_file, open_mode)
except (IOError, OSError):
pass
else:
self._lock_file_fd = fd
return None
```
I don't know why it got stuck rather than hitting the `pass` branch directly.
I am not quite familiar with file lock operations, so any help is highly appreciated.
## Steps to reproduce the bug
```python
ds = load_dataset('xsum')
```
## Expected results
A clear and concise description of the expected results.
## Actual results
```
>>> ds=load_dataset('xsum')
^CTraceback (most recent call last):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
OSError: [Errno 37] No locks available
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 818, in load_dataset
use_auth_token=use_auth_token,
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/load.py", line 470, in prepare_module
with FileLock(lock_path):
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "/user/HS502/yl02706/.conda/envs/lyc/lib/python3.6/site-packages/datasets/utils/filelock.py", line 402, in _acquire
fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
KeyboardInterrupt
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-4.15.0-135-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2618/timeline | null | null | null | null | false | [
"Hi @liyucheng09, thanks for reporting.\r\n\r\nApparently this issue has to do with your environment setup. One question: is your data in an NFS share? Some people have reported this error when using `fcntl` to write to an NFS share... If this is the case, then it might be that your NFS just may not be set up to provide file locks. You should ask your system administrator, or try these commands in the terminal:\r\n```shell\r\nsudo systemctl enable rpc-statd\r\nsudo systemctl start rpc-statd\r\n```"
] |
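If the NFS share cannot provide file locks, another direction — a sketch, not something suggested in the thread — is to point the cache at local, lock-capable storage, either via the `HF_DATASETS_CACHE` environment variable or per call:

```python
from datasets import load_dataset

# The path is illustrative; any local filesystem that supports fcntl locks works.
ds = load_dataset("xsum", cache_dir="/tmp/hf_datasets_cache")
```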
https://api.github.com/repos/huggingface/datasets/issues/1288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1288/comments | https://api.github.com/repos/huggingface/datasets/issues/1288/events | https://github.com/huggingface/datasets/pull/1288 | 759,309,457 | MDExOlB1bGxSZXF1ZXN0NTM0MzM2Mzgz | 1,288 | Add CodeSearchNet corpus dataset | [] | closed | false | null | 1 | 2020-12-08T10:07:50Z | 2020-12-09T17:05:28Z | 2020-12-09T17:05:28Z | null | This PR adds the CodeSearchNet corpus proxy dataset for semantic code search: https://github.com/github/CodeSearchNet
I have had a few issues, mentioned below. Would appreciate some help on how to solve them.
## Issues generating dataset card
Is there something wrong with my declaration of the dataset features?
```
features=datasets.Features(
{
"repository_name": datasets.Value("string"),
"func_path_in_repository": datasets.Value("string"),
"func_name": datasets.Value("string"),
"whole_func_string": datasets.Value("string"),
"language": datasets.Value("string"),
"func_code_string": datasets.Value("string"),
"func_code_tokens": datasets.Sequence(datasets.Value("string")),
"func_documentation_string": datasets.Value("string"),
"func_documentation_tokens": datasets.Sequence(datasets.Value("string")),
"split_name": datasets.Value("string"),
"func_code_url": datasets.Value("string"),
# TODO - add licensing info in the examples
}
),
```
When running the streamlit app for tagging the dataset on my machine, I get the following error:

## Issues with dummy data
Due to the unusual structure of the data, I have been unable to generate dummy data automatically.
I tried to generate it manually, but the pytest tests fail when using the manually-generated dummy data! They pass when using the real data.
```
============================================================================================== test session starts ==============================================================================================
platform linux -- Python 3.7.9, pytest-6.1.2, py-1.9.0, pluggy-0.13.1
plugins: xdist-2.1.0, forked-1.3.0
collected 1 item
tests/test_dataset_common.py F [100%]
=================================================================================================== FAILURES ====================================================================================================
________________________________________________________________________ LocalDatasetTest.test_load_dataset_all_configs_code_search_net _________________________________________________________________________
self = <tests.test_dataset_common.LocalDatasetTest testMethod=test_load_dataset_all_configs_code_search_net>, dataset_name = 'code_search_net'
@slow
def test_load_dataset_all_configs(self, dataset_name):
configs = self.dataset_tester.load_all_configs(dataset_name, is_local=True)
> self.dataset_tester.check_load_dataset(dataset_name, configs, is_local=True)
tests/test_dataset_common.py:237:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_dataset_common.py:198: in check_load_dataset
self.parent.assertTrue(len(dataset[split]) > 0)
E AssertionError: False is not true
--------------------------------------------------------------------------------------------- Captured stdout call ----------------------------------------------------------------------------------------------
Downloading and preparing dataset code_search_net/all (download: 1.00 MiB, generated: 1.00 MiB, post-processed: Unknown size, total: 2.00 MiB) to /tmp/tmppx78sj24/code_search_net/all/1.0.0...
Dataset code_search_net downloaded and prepared to /tmp/tmppx78sj24/code_search_net/all/1.0.0. Subsequent calls will reuse this data.
--------------------------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------------------------
... (irrelevant info - Deprecation warnings)
============================================================================================ short test summary info ============================================================================================
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_code_search_net - AssertionError: False is not true
========================================================================================= 1 failed, 4 warnings in 3.00s ========================================================================================
```
## Note: Data structure in S3
The data is stored on S3 and organized by programming language.
It follows the repository structure below:
```
.
├── <language_name> # e.g. python
│ └── final
│ └── jsonl
│ ├── test
│ │ └── <language_name>_test_0.jsonl.gz
│ ├── train
│ │ ├── <language_name>_train_0.jsonl.gz
│ │ ├── <language_name>_train_1.jsonl.gz
│ │ ├── ...
│ │ └── <language_name>_train_n.jsonl.gz
│ └── valid
│ └── <language_name>_valid_0.jsonl.gz
├── <language_name>_dedupe_definitions_v2.pkl
└── <language_name>_licenses.pkl
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1288/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1288.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1288",
"merged_at": "2020-12-09T17:05:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1288.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1288"
} | true | [
"@lhoestq ready for a second review"
] |
https://api.github.com/repos/huggingface/datasets/issues/3056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3056/comments | https://api.github.com/repos/huggingface/datasets/issues/3056/events | https://github.com/huggingface/datasets/pull/3056 | 1,022,345,564 | PR_kwDODunzps4tAB9h | 3,056 | Fix meteor metric for version >= 3.6.4 | [] | closed | false | null | 0 | 2021-10-11T07:11:44Z | 2021-10-11T07:29:20Z | 2021-10-11T07:29:19Z | null | After `nltk` update, the meteor metric expects pre-tokenized inputs (breaking change).
This PR fixes this issue, while maintaining compatibility with older versions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3056/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3056",
"merged_at": "2021-10-11T07:29:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3056"
} | true | [] |
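A sketch of the kind of version shim this implies — not necessarily the exact code used in the metric script; it assumes the NLTK `wordnet` and `punkt` data are installed:

```python
import nltk
from nltk import word_tokenize
from nltk.translate import meteor_score
from packaging import version

reference = "the cat sat on the mat"
prediction = "the cat is on the mat"

if version.parse(nltk.__version__) >= version.parse("3.6.4"):
    # Newer NLTK expects pre-tokenized inputs.
    score = meteor_score.single_meteor_score(word_tokenize(reference), word_tokenize(prediction))
else:
    score = meteor_score.single_meteor_score(reference, prediction)
print(score)
```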
https://api.github.com/repos/huggingface/datasets/issues/2150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2150/comments | https://api.github.com/repos/huggingface/datasets/issues/2150/events | https://github.com/huggingface/datasets/pull/2150 | 844,776,448 | MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx | 2,150 | Allow pickling of big in-memory tables | [] | closed | false | null | 0 | 2021-03-30T15:51:56Z | 2021-03-31T10:37:15Z | 2021-03-31T10:37:14Z | null | This should fix issue #2134
Pickling is limited to <4GiB objects, so it's not possible to pickle a big Arrow table (for multiprocessing, for example).
For big tables, we have to write them on disk and only pickle the path to the table. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2150/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2150.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2150",
"merged_at": "2021-03-31T10:37:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2150.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2150"
} | true | [] |
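A minimal sketch of the pattern described in the PR body — write the table to disk and pickle only the path. This is illustrative only and not the actual `datasets` code:

```python
import pickle
import pyarrow as pa
import pyarrow.feather as feather

class TableOnDisk:
    """Wraps a pyarrow Table; pickling writes the data to disk and pickles only the path."""

    def __init__(self, table, path):
        self.table = table
        self.path = path

    def __reduce__(self):
        feather.write_feather(self.table, self.path)  # materialize on disk
        return _load_table, (self.path,)              # pickle just the path

def _load_table(path):
    return TableOnDisk(feather.read_table(path), path)

table = pa.table({"x": list(range(10))})
restored = pickle.loads(pickle.dumps(TableOnDisk(table, "/tmp/table.feather")))
assert restored.table.num_rows == 10
```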
https://api.github.com/repos/huggingface/datasets/issues/2943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2943/comments | https://api.github.com/repos/huggingface/datasets/issues/2943/events | https://github.com/huggingface/datasets/issues/2943 | 1,000,355,115 | I_kwDODunzps47oDUr | 2,943 | Backwards compatibility broken for cached datasets that use `.filter()` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2021-09-19T16:16:37Z | 2021-09-20T16:25:43Z | 2021-09-20T16:25:42Z | null | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2943/timeline | null | completed | null | null | false | [
"Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?",
"If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.",
"Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR",
"I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available",
"Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !",
"Fixed by #2947."
] |
https://api.github.com/repos/huggingface/datasets/issues/2145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2145/comments | https://api.github.com/repos/huggingface/datasets/issues/2145/events | https://github.com/huggingface/datasets/pull/2145 | 844,603,518 | MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2 | 2,145 | Implement Dataset add_column | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-05-14T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"id": 6644287,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"open_issues": 0,
"state": "closed",
"title": "1.7",
"updated_at": "2021-05-31T16:20:53Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3"
} | 1 | 2021-03-30T14:02:14Z | 2021-04-29T14:50:44Z | 2021-04-29T14:50:43Z | null | Implement `Dataset.add_column`.
Close #1954. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2145/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"merged_at": "2021-04-29T14:50:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2145"
} | true | [
"#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)"
] |
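A brief usage sketch of the method this PR adds; the dataset, column name, and values are only illustrative:

```python
from datasets import load_dataset

ds = load_dataset("glue", "mrpc", split="train")
ds = ds.add_column("my_flag", [0] * len(ds))  # one value per row
print(ds.column_names)
```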
https://api.github.com/repos/huggingface/datasets/issues/1645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1645/comments | https://api.github.com/repos/huggingface/datasets/issues/1645/events | https://github.com/huggingface/datasets/pull/1645 | 775,473,106 | MDExOlB1bGxSZXF1ZXN0NTQ2MTQ4OTUx | 1,645 | Rename "part-of-speech-tagging" tag in some dataset cards | [] | closed | false | null | 0 | 2020-12-28T16:09:09Z | 2021-01-07T10:08:14Z | 2021-01-07T10:08:13Z | null | `part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1645/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1645/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1645.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1645",
"merged_at": "2021-01-07T10:08:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1645.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1645"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/879/comments | https://api.github.com/repos/huggingface/datasets/issues/879/events | https://github.com/huggingface/datasets/issues/879 | 748,848,847 | MDU6SXNzdWU3NDg4NDg4NDc= | 879 | boolq does not load | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 3 | 2020-11-23T14:28:28Z | 2022-10-05T12:23:32Z | 2022-10-05T12:23:32Z | null | Hi
I am getting these errors trying to load boolq. Thanks!
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
dataset = self.load_dataset(split=split)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset
return datasets.load_dataset(self.task.name, split=split)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators
downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom
get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache
f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/879/timeline | null | completed | null | null | false | [
"Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.",
"hey\ni do the exact same commands. for me it fails i guess might be issues with\ncaching maybe?\nthanks\nbest\nrabeeh\n\nOn Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <[email protected]>\nwrote:\n\n> Hi ! It runs on my side without issues. I tried\n>\n> from datasets import load_datasetload_dataset(\"boolq\")\n>\n> What version of datasets and tensorflow are your runnning ?\n> Also if you manage to get a minimal reproducible script (on google colab\n> for example) that would be useful.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A>\n> .\n>\n",
"Could you check if it works on the master branch ?\r\nYou can use `load_dataset(\"boolq\", script_version=\"master\")` to do so.\r\nWe did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881"
] |
https://api.github.com/repos/huggingface/datasets/issues/5299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5299/comments | https://api.github.com/repos/huggingface/datasets/issues/5299/events | https://github.com/huggingface/datasets/pull/5299 | 1,464,695,091 | PR_kwDODunzps5Dt3Sk | 5,299 | Fix xopen for Windows pathnames | [] | closed | false | null | 1 | 2022-11-25T15:35:28Z | 2022-11-29T08:23:58Z | 2022-11-29T08:21:24Z | null | This PR fixes a bug in `xopen` function for Windows pathnames.
Fix #5298. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5299/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5299.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5299",
"merged_at": "2022-11-29T08:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5299.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5299"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3989/comments | https://api.github.com/repos/huggingface/datasets/issues/3989/events | https://github.com/huggingface/datasets/pull/3989 | 1,176,955,078 | PR_kwDODunzps400l1S | 3,989 | Remove old wikipedia leftovers | [] | closed | false | null | 3 | 2022-03-22T15:25:46Z | 2022-03-31T15:35:26Z | 2022-03-31T15:30:16Z | null | After updating Wikipedia dataset, remove old wikipedia leftovers from doc.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3989/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3989.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3989",
"merged_at": "2022-03-31T15:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3989.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3989"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This makes me think we shouldn't advise the use of load_dataset in dataset scripts, since it doesn't guarantee that the cache will work as expected (the cache directory is not set correctly, and the required disk space for downloaded files is not recorded)\r\n\r\n@lhoestq, do you think it could be a good idea to add a comment in this script WARNING that using load_dataset in a script is not good practice and that people should avoid using that script as a template to create other scripts? ",
"good idea ! :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3488/comments | https://api.github.com/repos/huggingface/datasets/issues/3488/events | https://github.com/huggingface/datasets/issues/3488 | 1,089,345,653 | I_kwDODunzps5A7hh1 | 3,488 | URL query parameters are set as path in the compression hop for fsspec | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2021-12-27T16:29:00Z | 2022-01-05T15:15:25Z | null | null | ## Describe the bug
There is an issue with `StreamingDownloadManager._extract`.
I don't know how the test `test_streaming_gg_drive_gzipped` passes:
For
```python
TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL)
```
gives `urlpath`:
```python
'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz'
```
The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz`
## Steps to reproduce the bug
```python
from datasets.utils.streaming_download_manager import StreamingDownloadManager
dl_manager = StreamingDownloadManager()
urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz")
print(urlpath)
```
## Expected results
The query parameters should not be set as part of the path.
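Editor's illustration (not part of the original report): the filename for the compression hop can be derived from the URL path alone, which drops the query parameters, e.g. with the standard library:
```python
from urllib.parse import urlparse

url = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz"
filename = urlparse(url).path.split("/")[-1]  # 'uc' -- the query string is not included
print(f"gzip://{filename}::{url}")
# gzip://uc::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz
```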
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3488/timeline | null | null | null | null | false | [
"I think the test passes because it simply ignore what's after `gzip://`.\r\n\r\nThe returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result.\r\n\r\nWe can decide to change this and simply have `gzip://::url`, this way we don't need to guess the filename, what do you think ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5459/comments | https://api.github.com/repos/huggingface/datasets/issues/5459/events | https://github.com/huggingface/datasets/pull/5459 | 1,555,367,504 | PR_kwDODunzps5Icjwe | 5,459 | Disable aiohttp requoting of redirection URL | [] | closed | false | null | 7 | 2023-01-24T17:18:59Z | 2023-02-01T08:45:33Z | 2023-01-31T08:37:54Z | null | The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`
This is a problem for our Hugging Face Hub, which requires the exact URL from the `Location` header.
Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `response-content-disposition` contains `%27`:
```
response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B
```
and after the requoting, the `%27` characters get unquoted to `'`:
```
response-content-disposition=attachment%3B+filename*%3DUTF-8''sample.jsonl.gz%3B+filename%3D%22sample.jsonl.gz%22%3B
```
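Editor's sketch (for illustration only, not the patch in this PR): on the client side, `aiohttp` exposes a `requote_redirect_url` flag on `ClientSession` that keeps redirect URLs exactly as returned by the server:
```python
import aiohttp

async def fetch(url: str) -> bytes:
    # requote_redirect_url=False makes aiohttp build the redirect URL with
    # yarl.URL(..., encoded=True), so percent-escapes such as %27 are preserved.
    async with aiohttp.ClientSession(requote_redirect_url=False) as session:
        async with session.get(url) as response:
            return await response.read()
```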
This PR disables the `aiohttp` requoting of redirection URLs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5459/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5459",
"merged_at": "2023-01-31T08:37:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5459"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ",
"The lib `requests` does not perform that requote on redirect URLs.",
"Indeed, the `requests` library does perform a requoting, but this does not unquote `%27`:\r\n```python\r\nIn [1]: from requests.utils import requote_uri\r\n\r\nIn [2]: url = \"https://netloc/path?param=param%27%27value\"\r\n\r\nIn [3]: url\r\nOut[3]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [4]: requote_uri(url)\r\nOut[4]: 'https://netloc/path?param=param%27%27value'\r\n```\r\n\r\nHowever, the `aiohttp` library uses `yarl.ULR` and this does unquote `%27`:\r\n```python\r\nIn [5]: from yarl import URL\r\n\r\nIn [6]: url\r\nOut[6]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [7]: str(URL(url))\r\nOut[7]: \"https://netloc/path?param=param''value\"\r\n```\r\n\r\nIf we pass `requote_redirect_url=False` to `aiohttp`, then it passes `encoded=True` to `yarl.ULR`: https://github.com/aio-libs/aiohttp/blob/4635161ee8e7ad321cca46e01ce5bfeb1ad8bf26/aiohttp/client.py#L578-L580\r\n```python\r\nparsed_url = URL(\r\n r_url, encoded=not self._requote_redirect_url\r\n)\r\n```\r\nwhich does not unquote `%27`:\r\n```python\r\nIn [8]: url\r\nOut[8]: 'https://netloc/path?param=param%27%27value'\r\n\r\nIn [9]: str(URL(url, encoded=True))\r\nOut[9]: 'https://netloc/path?param=param%27%27value'\r\n```",
"See the issues we opened in the respective libraries:\r\n- aiohttp\r\n - aio-libs/aiohttp#7183\r\n- requests\r\n - psf/requests#6341",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012399 / 0.011353 (0.001047) | 0.006388 / 0.011008 (-0.004620) | 0.134173 / 0.038508 (0.095665) | 0.037059 / 0.023109 (0.013949) | 0.420697 / 0.275898 (0.144799) | 0.473981 / 0.323480 (0.150502) | 0.009857 / 0.007986 (0.001871) | 0.004791 / 0.004328 (0.000463) | 0.106886 / 0.004250 (0.102636) | 0.044871 / 0.037052 (0.007818) | 0.429843 / 0.258489 (0.171354) | 0.461569 / 0.293841 (0.167728) | 0.057285 / 0.128546 (-0.071261) | 0.018809 / 0.075646 (-0.056837) | 0.432613 / 0.419271 (0.013342) | 0.058086 / 0.043533 (0.014553) | 0.413064 / 0.255139 (0.157925) | 0.444407 / 0.283200 (0.161207) | 0.119102 / 0.141683 (-0.022581) | 1.875954 / 1.452155 (0.423799) | 1.916392 / 1.492716 (0.423676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267489 / 0.018006 (0.249483) | 0.567554 / 0.000490 (0.567064) | 0.005901 / 0.000200 (0.005701) | 0.000134 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031248 / 0.037411 (-0.006164) | 0.123014 / 0.014526 (0.108489) | 0.140001 / 0.176557 (-0.036556) | 0.191476 / 0.737135 (-0.545659) | 0.141687 / 0.296338 (-0.154652) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.637481 / 0.215209 (0.422272) | 6.255969 / 2.077655 (4.178314) | 2.559811 / 1.504120 (1.055691) | 2.118154 / 1.541195 (0.576960) | 2.079487 / 1.468490 
(0.610997) | 1.201079 / 4.584777 (-3.383698) | 5.592625 / 3.745712 (1.846913) | 5.143344 / 5.269862 (-0.126517) | 2.764716 / 4.565676 (-1.800960) | 0.142539 / 0.424275 (-0.281736) | 0.015541 / 0.007607 (0.007934) | 0.771407 / 0.226044 (0.545363) | 7.631657 / 2.268929 (5.362728) | 3.279684 / 55.444624 (-52.164940) | 2.587566 / 6.876477 (-4.288911) | 2.624622 / 2.142072 (0.482549) | 1.427878 / 4.805227 (-3.377350) | 0.257759 / 6.500664 (-6.242906) | 0.078616 / 0.075469 (0.003147) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609305 / 1.841788 (-0.232483) | 18.258792 / 8.074308 (10.184484) | 20.345242 / 10.191392 (10.153850) | 0.267366 / 0.680424 (-0.413058) | 0.047035 / 0.534201 (-0.487166) | 0.568881 / 0.579283 (-0.010402) | 0.662763 / 0.434364 (0.228399) | 0.668927 / 0.540337 (0.128590) | 0.755766 / 1.386936 (-0.631170) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010017 / 0.011353 (-0.001336) | 0.006816 / 0.011008 (-0.004192) | 0.105038 / 0.038508 (0.066529) | 0.038689 / 0.023109 (0.015580) | 0.482113 / 0.275898 (0.206215) | 0.540072 / 0.323480 (0.216592) | 0.007738 / 0.007986 (-0.000248) | 0.005134 / 0.004328 (0.000806) | 0.102203 / 0.004250 (0.097953) | 0.054080 / 0.037052 (0.017028) | 0.501057 / 0.258489 (0.242568) | 0.567186 / 0.293841 (0.273345) | 0.060330 / 0.128546 (-0.068217) | 0.020059 / 0.075646 (-0.055587) | 0.123102 / 0.419271 (-0.296170) | 0.063426 / 0.043533 (0.019893) | 0.494171 / 0.255139 (0.239032) | 0.538238 / 0.283200 (0.255039) | 0.119613 / 0.141683 (-0.022069) | 1.853728 / 1.452155 (0.401574) | 1.984621 / 1.492716 (0.491904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282511 / 0.018006 (0.264505) | 0.563190 / 0.000490 (0.562700) | 0.000465 / 0.000200 (0.000265) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029267 / 0.037411 (-0.008144) | 0.135618 / 0.014526 (0.121093) | 0.146286 / 0.176557 (-0.030271) | 0.188570 / 0.737135 (-0.548565) | 0.155839 / 0.296338 (-0.140499) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.671660 / 0.215209 (0.456451) | 6.718775 / 2.077655 (4.641120) | 3.004601 / 1.504120 (1.500481) | 2.640504 / 1.541195 (1.099309) | 2.666788 / 1.468490 (1.198298) | 1.242655 / 4.584777 (-3.342122) | 5.780119 / 3.745712 (2.034407) | 3.247935 / 5.269862 (-2.021927) | 2.114007 / 4.565676 (-2.451669) | 0.147546 / 0.424275 (-0.276729) | 0.014408 / 0.007607 (0.006801) | 0.824407 / 0.226044 (0.598362) | 8.278185 / 2.268929 (6.009257) | 3.733463 / 55.444624 (-51.711161) | 2.976732 / 6.876477 (-3.899745) | 3.132758 / 2.142072 (0.990686) | 1.446095 / 4.805227 (-3.359132) | 0.258628 / 6.500664 (-6.242036) | 0.085513 / 0.075469 (0.010043) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.702681 / 1.841788 (-0.139106) | 18.725123 / 8.074308 (10.650815) | 19.622808 / 10.191392 (9.431416) | 0.215845 / 0.680424 (-0.464579) | 0.029246 / 0.534201 (-0.504955) | 0.554819 / 0.579283 (-0.024464) | 0.630926 / 0.434364 (0.196562) | 0.637663 / 0.540337 (0.097325) | 0.837948 / 1.386936 (-0.548988) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008540 / 0.011353 (-0.002813) | 0.004538 / 0.011008 (-0.006470) | 0.101507 / 0.038508 (0.062999) | 0.029751 / 0.023109 (0.006641) | 0.292608 / 0.275898 (0.016710) | 0.354734 / 0.323480 (0.031254) | 0.007430 / 0.007986 (-0.000556) | 0.003365 / 0.004328 (-0.000964) | 0.078703 / 0.004250 (0.074452) | 0.034858 / 0.037052 (-0.002194) | 0.303518 / 0.258489 (0.045029) | 0.336523 / 0.293841 (0.042682) | 0.033741 / 0.128546 (-0.094805) | 0.011460 / 0.075646 (-0.064186) | 0.319551 / 0.419271 (-0.099721) | 0.041102 / 0.043533 (-0.002431) | 0.295914 / 0.255139 (0.040775) | 0.322142 / 0.283200 (0.038943) | 0.084694 / 0.141683 (-0.056989) | 1.481308 / 1.452155 (0.029153) | 1.530271 / 1.492716 (0.037554) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180516 / 0.018006 (0.162510) | 0.405741 / 0.000490 (0.405251) | 0.002806 / 0.000200 (0.002606) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023359 / 0.037411 (-0.014052) | 0.096950 / 0.014526 (0.082424) | 0.103991 / 0.176557 (-0.072566) | 0.143700 / 0.737135 (-0.593435) | 0.106764 / 0.296338 (-0.189575) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416966 / 0.215209 (0.201757) | 4.145601 / 2.077655 (2.067946) | 1.838258 / 1.504120 (0.334139) | 1.629396 / 1.541195 (0.088201) | 1.649707 / 1.468490 
(0.181217) | 0.689624 / 4.584777 (-3.895153) | 3.414584 / 3.745712 (-0.331129) | 1.874295 / 5.269862 (-3.395566) | 1.251930 / 4.565676 (-3.313746) | 0.081782 / 0.424275 (-0.342493) | 0.012868 / 0.007607 (0.005261) | 0.523904 / 0.226044 (0.297859) | 5.251032 / 2.268929 (2.982104) | 2.301549 / 55.444624 (-53.143075) | 1.942110 / 6.876477 (-4.934367) | 2.023014 / 2.142072 (-0.119058) | 0.816492 / 4.805227 (-3.988736) | 0.150107 / 6.500664 (-6.350558) | 0.065118 / 0.075469 (-0.010351) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226433 / 1.841788 (-0.615355) | 13.852569 / 8.074308 (5.778261) | 13.862779 / 10.191392 (3.671387) | 0.146361 / 0.680424 (-0.534062) | 0.028652 / 0.534201 (-0.505549) | 0.398251 / 0.579283 (-0.181032) | 0.403590 / 0.434364 (-0.030774) | 0.492184 / 0.540337 (-0.048154) | 0.581040 / 1.386936 (-0.805896) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006859 / 0.011353 (-0.004494) | 0.004632 / 0.011008 (-0.006376) | 0.076653 / 0.038508 (0.038145) | 0.027865 / 0.023109 (0.004755) | 0.354472 / 0.275898 (0.078573) | 0.385462 / 0.323480 (0.061982) | 0.005125 / 0.007986 (-0.002861) | 0.003420 / 0.004328 (-0.000909) | 0.076018 / 0.004250 (0.071768) | 0.040197 / 0.037052 (0.003144) | 0.353675 / 0.258489 (0.095186) | 0.394911 / 0.293841 (0.101070) | 0.032909 / 0.128546 (-0.095637) | 0.011713 / 0.075646 (-0.063933) | 0.085921 / 0.419271 (-0.333350) | 0.044462 / 0.043533 (0.000929) | 0.349997 / 0.255139 (0.094858) | 0.375207 / 0.283200 (0.092008) | 0.091288 / 0.141683 (-0.050394) | 1.536515 / 1.452155 (0.084361) | 1.581878 / 1.492716 (0.089162) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.273284 / 0.018006 (0.255277) | 0.424457 / 0.000490 (0.423967) | 0.044659 / 0.000200 (0.044459) | 0.000247 / 0.000054 (0.000192) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025473 / 0.037411 (-0.011938) | 0.100014 / 0.014526 (0.085488) | 0.108551 / 0.176557 (-0.068006) | 0.147913 / 0.737135 (-0.589223) | 0.112729 / 0.296338 (-0.183610) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448162 / 0.215209 (0.232953) | 4.472701 / 2.077655 (2.395046) | 2.078384 / 1.504120 (0.574264) | 1.861292 / 1.541195 (0.320097) | 1.920482 / 1.468490 (0.451991) | 0.706968 / 4.584777 (-3.877809) | 3.433109 / 3.745712 (-0.312603) | 1.898684 / 5.269862 (-3.371178) | 1.174375 / 4.565676 (-3.391302) | 0.083666 / 0.424275 (-0.340609) | 0.012388 / 0.007607 (0.004781) | 0.546011 / 0.226044 (0.319966) | 5.487514 / 2.268929 (3.218585) | 2.534124 / 55.444624 (-52.910500) | 2.168441 / 6.876477 (-4.708036) | 2.203458 / 2.142072 (0.061386) | 0.813333 / 4.805227 (-3.991894) | 0.153169 / 6.500664 (-6.347495) | 0.067151 / 0.075469 (-0.008318) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277815 / 1.841788 (-0.563972) | 13.920545 / 8.074308 (5.846237) | 13.473801 / 10.191392 (3.282409) | 0.129035 / 0.680424 (-0.551389) | 0.016737 / 0.534201 (-0.517464) | 0.388413 / 0.579283 (-0.190870) | 0.388785 / 0.434364 (-0.045579) | 0.481735 / 0.540337 (-0.058602) | 0.576390 / 1.386936 (-0.810546) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1011/comments | https://api.github.com/repos/huggingface/datasets/issues/1011/events | https://github.com/huggingface/datasets/pull/1011 | 755,463,726 | MDExOlB1bGxSZXF1ZXN0NTMxMTY5MjA3 | 1,011 | Add Bilingual Corpus of Arabic-English Parallel Tweets | [] | closed | false | null | 6 | 2020-12-02T17:20:02Z | 2020-12-04T14:45:10Z | 2020-12-04T14:44:33Z | null | Added Bilingual Corpus of Arabic-English Parallel Tweets. The link to the dataset can be found [here](https://alt.qcri.org/wp-content/uploads/2020/08/Bilingual-Corpus-of-Arabic-English-Parallel-Tweets.zip) and the paper can be found [here](https://www.aclweb.org/anthology/2020.bucc-1.3.pdf)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1011/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1011.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1011",
"merged_at": "2020-12-04T14:44:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1011.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1011"
} | true | [
"IMO, the problem with this dataset is that it is not really a text/nlp dataset. These are just collections of tweet ids. So, ultimately, one needs to crawl twitter to get the actual text.",
"That's true.\r\n\r\n",
"at least it's clear in the description that one needs to collect the tweets : \r\n```\r\nThis resource is a result of a generic method for collecting parallel tweets.\r\n```",
"Looks like this is failing for other datasets. Should I rebase it and push again?\r\nAlso rebasing and pushing is reflecting changes in many other files (ultimately forcing me to open a new branch and a new PR) any way to avoid this?",
"No let me merge this one directly, it's fine",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/5115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5115/comments | https://api.github.com/repos/huggingface/datasets/issues/5115/events | https://github.com/huggingface/datasets/pull/5115 | 1,409,250,020 | PR_kwDODunzps5Az9Pm | 5,115 | Fix iter_batches | [] | closed | false | null | 3 | 2022-10-14T12:06:14Z | 2022-10-14T15:02:15Z | 2022-10-14T14:59:58Z | null | The `pa.Table.to_reader()` method available in `pyarrow>=8.0.0` may return chunks of size < `max_chunksize`, therefore `iter_batches` can return batches smaller than the `batch_size` specified by the user
Therefore batched `map` couldn't always use batches of the right size, e.g. this fails because it runs only on one batch of one element:
```python
from datasets import Dataset, concatenate_datasets
ds = concatenate_datasets([Dataset.from_dict({"a": [i]}) for i in range(10)])
ds2 = ds.map(lambda _: {}, batched=True)
assert list(ds2) == list(ds)
```
This was introduced in https://github.com/huggingface/datasets/pull/5030
Close https://github.com/huggingface/datasets/issues/5111
This will require a patch release along with https://github.com/huggingface/datasets/pull/5113
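Editor's note: the sketch below only illustrates the contract being restored (fixed-size batches no matter how Arrow chunked the table); it is not the patch in this PR, and the helper name is made up:
```python
import pyarrow as pa

def iter_fixed_size_batches(pa_table: pa.Table, batch_size: int):
    # Slice the table explicitly instead of relying on to_reader()'s chunking,
    # so every yielded batch has exactly `batch_size` rows (except the last one).
    for offset in range(0, pa_table.num_rows, batch_size):
        yield pa_table.slice(offset, batch_size).combine_chunks().to_batches()[0]
```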
TODO:
- [x] fix tests
- [x] add more tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5115/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5115.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5115",
"merged_at": "2022-10-14T14:59:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5115.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5115"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I also ran the code in https://github.com/huggingface/datasets/issues/5111 and it works fine now :)",
"This is ready for review :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/551/comments | https://api.github.com/repos/huggingface/datasets/issues/551/events | https://github.com/huggingface/datasets/pull/551 | 690,034,762 | MDExOlB1bGxSZXF1ZXN0NDc2OTkwNjAw | 551 | added HANS dataset | [] | closed | false | null | 0 | 2020-09-01T10:42:02Z | 2020-09-01T12:17:10Z | 2020-09-01T12:17:10Z | null | Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/551/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/551",
"merged_at": "2020-09-01T12:17:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/551"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4840/comments | https://api.github.com/repos/huggingface/datasets/issues/4840/events | https://github.com/huggingface/datasets/issues/4840 | 1,337,342,672 | I_kwDODunzps5PtjrQ | 4,840 | Dataset Viewer issue for darragh/demo_data_raw3 | [] | open | false | null | 5 | 2022-08-12T15:22:58Z | 2022-09-08T07:55:44Z | null | null | ### Link
https://huggingface.co/datasets/darragh/demo_data_raw3
### Description
```
Exception: ValueError
Message: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.
```
reported by @NielsRogge
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4840/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4840/timeline | null | null | null | null | false | [
"do you have an idea of why it can occur @huggingface/datasets? The dataset consists of a single parquet file.",
"Thanks for reporting @severo.\r\n\r\nI'm not able to reproduce that error. I get instead:\r\n```\r\nFileNotFoundError: [Errno 2] No such file or directory: 'orix/data/ChiSig/唐合乐-9-3.jpg'\r\n```\r\n\r\nWhich pyarrow version are you using? Mine is 6.0.1. ",
"OK, I get now your error when not streaming.",
"OK!\r\n\r\nIf it's useful, the pyarrow version is 7.0.0:\r\n\r\nhttps://github.com/huggingface/datasets-server/blob/487c39d87998f8d5a35972f1027d6c8e588e622d/services/worker/poetry.lock#L1537-L1543",
"Apparently, there is something weird with that Parquet file: its schema is:\r\n```\r\nimages: extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>>\r\n```\r\n\r\nI have forced a right schema:\r\n```python\r\nfrom datasets import Features, Image, load_dataset\r\n\r\nfeatures = Features({\"images\": Image()})\r\nds = datasets.load_dataset(\"parquet\", split=\"train\", data_files=\"train-00000-of-00001.parquet\", features=features)\r\n```\r\nand then recreated a new Parquet file:\r\n```python\r\nds.to_parquet(\"train.parquet\")\r\n```\r\n\r\nNow this Parquet file has the right schema:\r\n```\r\nimages: struct<bytes: binary, path: string>\r\n child 0, bytes: binary\r\n child 1, path: string\r\n```\r\nand can be loaded normally:\r\n```python\r\nIn [26]: ds = load_dataset(\"parquet\", split=\"train\", data_files=\"dataset.parquet\")\r\nn [27]: ds\r\nOut[27]: \r\nDataset({\r\n features: ['images'],\r\n num_rows: 20\r\n})\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/5395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5395/comments | https://api.github.com/repos/huggingface/datasets/issues/5395/events | https://github.com/huggingface/datasets/pull/5395 | 1,513,997,335 | PR_kwDODunzps5GXLUl | 5,395 | Temporarily pin pydantic test dependency | [] | closed | false | null | 3 | 2022-12-29T19:34:19Z | 2022-12-30T06:36:57Z | 2022-12-29T21:00:26Z | null | Temporarily pin `pydantic` until a permanent solution is found.
Fix #5394. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5395/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5395.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5395",
"merged_at": "2022-12-29T21:00:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5395.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5395"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012220 / 0.011353 (0.000867) | 0.005943 / 0.011008 (-0.005065) | 0.128223 / 0.038508 (0.089715) | 0.037352 / 0.023109 (0.014242) | 0.397143 / 0.275898 (0.121245) | 0.483935 / 0.323480 (0.160455) | 0.010279 / 0.007986 (0.002293) | 0.004842 / 0.004328 (0.000513) | 0.101403 / 0.004250 (0.097153) | 0.042935 / 0.037052 (0.005883) | 0.421642 / 0.258489 (0.163153) | 0.456328 / 0.293841 (0.162487) | 0.065639 / 0.128546 (-0.062907) | 0.019820 / 0.075646 (-0.055826) | 0.426090 / 0.419271 (0.006818) | 0.069583 / 0.043533 (0.026051) | 0.402662 / 0.255139 (0.147523) | 0.428826 / 0.283200 (0.145626) | 0.116760 / 0.141683 (-0.024923) | 1.806216 / 1.452155 (0.354061) | 1.852629 / 1.492716 (0.359913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226555 / 0.018006 (0.208548) | 0.584693 / 0.000490 (0.584203) | 0.008612 / 0.000200 (0.008412) | 0.000205 / 0.000054 (0.000150) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028393 / 0.037411 (-0.009018) | 0.123355 / 0.014526 (0.108829) | 0.134423 / 0.176557 (-0.042133) | 0.188536 / 0.737135 (-0.548600) | 0.141595 / 0.296338 (-0.154743) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.589359 / 0.215209 (0.374150) | 5.974655 / 2.077655 (3.897001) | 2.465580 / 1.504120 (0.961460) | 2.007618 / 1.541195 (0.466424) | 2.078788 / 1.468490 
(0.610298) | 1.216646 / 4.584777 (-3.368131) | 5.217516 / 3.745712 (1.471804) | 3.107188 / 5.269862 (-2.162674) | 2.251641 / 4.565676 (-2.314036) | 0.138640 / 0.424275 (-0.285635) | 0.015046 / 0.007607 (0.007439) | 0.780092 / 0.226044 (0.554048) | 7.749564 / 2.268929 (5.480635) | 3.080708 / 55.444624 (-52.363917) | 2.393897 / 6.876477 (-4.482579) | 2.387738 / 2.142072 (0.245665) | 1.458844 / 4.805227 (-3.346384) | 0.252476 / 6.500664 (-6.248188) | 0.076594 / 0.075469 (0.001125) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.540868 / 1.841788 (-0.300919) | 17.295684 / 8.074308 (9.221376) | 19.669300 / 10.191392 (9.477908) | 0.250315 / 0.680424 (-0.430109) | 0.045068 / 0.534201 (-0.489133) | 0.538840 / 0.579283 (-0.040443) | 0.584443 / 0.434364 (0.150079) | 0.614476 / 0.540337 (0.074138) | 0.729928 / 1.386936 (-0.657008) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009218 / 0.011353 (-0.002135) | 0.006261 / 0.011008 (-0.004747) | 0.125541 / 0.038508 (0.087033) | 0.034405 / 0.023109 (0.011296) | 0.468381 / 0.275898 (0.192483) | 0.503336 / 0.323480 (0.179856) | 0.006839 / 0.007986 (-0.001146) | 0.004724 / 0.004328 (0.000396) | 0.097875 / 0.004250 (0.093625) | 0.051278 / 0.037052 (0.014225) | 0.473323 / 0.258489 (0.214834) | 0.537392 / 0.293841 (0.243551) | 0.055588 / 0.128546 (-0.072958) | 0.021041 / 0.075646 (-0.054605) | 0.416952 / 0.419271 (-0.002320) | 0.070128 / 0.043533 (0.026595) | 0.465224 / 0.255139 (0.210085) | 0.504678 / 0.283200 (0.221478) | 0.112504 / 0.141683 (-0.029179) | 1.865865 / 1.452155 (0.413710) | 1.988296 / 1.492716 (0.495580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.314170 / 0.018006 (0.296164) | 0.526726 / 0.000490 (0.526236) | 0.018691 / 0.000200 (0.018491) | 0.000128 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033772 / 0.037411 (-0.003639) | 0.124796 / 0.014526 (0.110270) | 0.134700 / 0.176557 (-0.041856) | 0.190595 / 0.737135 (-0.546541) | 0.143205 / 0.296338 (-0.153133) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.656708 / 0.215209 (0.441499) | 6.470503 / 2.077655 (4.392848) | 2.866430 / 1.504120 (1.362310) | 2.506846 / 1.541195 (0.965651) | 2.548669 / 1.468490 (1.080179) | 1.226695 / 4.584777 (-3.358082) | 5.117866 / 3.745712 (1.372153) | 3.032822 / 5.269862 (-2.237040) | 1.999152 / 4.565676 (-2.566524) | 0.142974 / 0.424275 (-0.281301) | 0.015011 / 0.007607 (0.007404) | 0.799729 / 0.226044 (0.573684) | 8.286313 / 2.268929 (6.017385) | 3.636482 / 55.444624 (-51.808142) | 2.888038 / 6.876477 (-3.988439) | 2.924982 / 2.142072 (0.782910) | 1.471996 / 4.805227 (-3.333231) | 0.257119 / 6.500664 (-6.243545) | 0.077294 / 0.075469 (0.001825) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608290 / 1.841788 (-0.233497) | 17.599119 / 8.074308 (9.524811) | 18.917086 / 10.191392 (8.725694) | 0.236237 / 0.680424 (-0.444187) | 0.026061 / 0.534201 (-0.508140) | 0.527359 / 0.579283 (-0.051925) | 0.589176 / 0.434364 (0.154812) | 0.602310 / 0.540337 (0.061973) | 0.726756 / 1.386936 (-0.660180) |\n\n</details>\n</details>\n\n\n",
"Issue reported to `pydantic`: \r\n- https://github.com/pydantic/pydantic/issues/4885\r\n\r\nFixing PR at `pydantic`:\r\n- https://github.com/pydantic/pydantic/pull/4886"
] |
https://api.github.com/repos/huggingface/datasets/issues/1494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1494/comments | https://api.github.com/repos/huggingface/datasets/issues/1494/events | https://github.com/huggingface/datasets/pull/1494 | 762,992,601 | MDExOlB1bGxSZXF1ZXN0NTM3NDg2MzU4 | 1,494 | Added Opus Wikipedia | [] | closed | false | null | 1 | 2020-12-11T22:28:03Z | 2020-12-17T14:38:28Z | 2020-12-17T14:38:28Z | null | Dataset : http://opus.nlpl.eu/Wikipedia.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1494/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1494.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1494",
"merged_at": "2020-12-17T14:38:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1494.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1494"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4986/comments | https://api.github.com/repos/huggingface/datasets/issues/4986/events | https://github.com/huggingface/datasets/pull/4986 | 1,375,895,035 | PR_kwDODunzps4_GNSd | 4,986 | [doc] Fix broken snippet that had too many quotes | [] | closed | false | null | 2 | 2022-09-16T12:41:07Z | 2022-09-16T22:12:21Z | 2022-09-16T17:32:14Z | null | Hello!
### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes
### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map
This screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly:

The change speaks for itself.
Thank you for the detailed documentation, by the way.
- Tom Aarsen
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4986/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"merged_at": "2022-09-16T17:32:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in `datasets`, hah!\r\n\r\nAs for this PR, the issue seems solved according to the [new PR documentation](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4986/en/process#map):\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5561/comments | https://api.github.com/repos/huggingface/datasets/issues/5561/events | https://github.com/huggingface/datasets/pull/5561 | 1,593,862,388 | PR_kwDODunzps5Kcxw_ | 5,561 | Add pre-commit config yaml file to enable automatic code formatting | [] | closed | false | null | 6 | 2023-02-21T17:35:07Z | 2023-02-28T15:37:22Z | 2023-02-23T18:23:29Z | null | @huggingface/datasets do you think it would be useful? Motivation - sometimes PRs are like 30% "fix: style" commits :)
If so, I need to double-check the config, but it works as expected for me locally. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5561/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5561",
"merged_at": "2023-02-23T18:23:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5561"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Better yet have someone enable pre-commit CI https://pre-commit.ci/ and it will apply the pre-commit fixes to the PR automatically as an additional commit.",
"@Skylion007 hi! I agree with @nateraw here, I'd better not force to use pre-commit so I'm not setting it up in the CI for now. And regarding end-of-file - currently it's being done by `black`. \r\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008704 / 0.011353 (-0.002649) | 0.004448 / 0.011008 (-0.006560) | 0.099530 / 0.038508 (0.061022) | 0.029739 / 0.023109 (0.006629) | 0.329267 / 0.275898 (0.053369) | 0.368805 / 0.323480 (0.045325) | 0.006852 / 0.007986 (-0.001133) | 0.004575 / 0.004328 (0.000246) | 0.076838 / 0.004250 (0.072588) | 0.033885 / 0.037052 (-0.003167) | 0.336340 / 0.258489 (0.077851) | 0.384880 / 0.293841 (0.091039) | 0.034051 / 0.128546 (-0.094495) | 0.011638 / 0.075646 (-0.064009) | 0.321650 / 0.419271 (-0.097622) | 0.041202 / 0.043533 (-0.002330) | 0.330841 / 0.255139 (0.075702) | 0.361329 / 0.283200 (0.078130) | 0.084864 / 0.141683 (-0.056819) | 1.454005 / 1.452155 (0.001850) | 1.542167 / 1.492716 (0.049451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196207 / 0.018006 (0.178200) | 0.400675 / 0.000490 (0.400185) | 0.000403 / 0.000200 (0.000203) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022694 / 0.037411 (-0.014717) | 0.095139 / 0.014526 (0.080613) | 0.104129 / 0.176557 (-0.072427) | 0.168688 / 0.737135 (-0.568447) | 0.109243 / 0.296338 (-0.187096) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427520 / 0.215209 (0.212311) | 4.237726 / 2.077655 (2.160071) | 2.191887 / 1.504120 (0.687767) | 1.987750 / 1.541195 (0.446555) | 1.996540 / 1.468490 
(0.528050) | 0.696416 / 4.584777 (-3.888361) | 3.454536 / 3.745712 (-0.291176) | 2.023600 / 5.269862 (-3.246261) | 1.336394 / 4.565676 (-3.229282) | 0.082933 / 0.424275 (-0.341342) | 0.012572 / 0.007607 (0.004965) | 0.534330 / 0.226044 (0.308285) | 5.347588 / 2.268929 (3.078659) | 2.640397 / 55.444624 (-52.804228) | 2.338266 / 6.876477 (-4.538211) | 2.431969 / 2.142072 (0.289897) | 0.821335 / 4.805227 (-3.983893) | 0.151905 / 6.500664 (-6.348759) | 0.067983 / 0.075469 (-0.007486) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.228841 / 1.841788 (-0.612947) | 13.660437 / 8.074308 (5.586128) | 13.729442 / 10.191392 (3.538050) | 0.165835 / 0.680424 (-0.514589) | 0.028753 / 0.534201 (-0.505448) | 0.400143 / 0.579283 (-0.179140) | 0.403714 / 0.434364 (-0.030650) | 0.492168 / 0.540337 (-0.048170) | 0.581151 / 1.386936 (-0.805785) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006289 / 0.011353 (-0.005064) | 0.004419 / 0.011008 (-0.006589) | 0.077220 / 0.038508 (0.038712) | 0.027170 / 0.023109 (0.004060) | 0.344988 / 0.275898 (0.069090) | 0.374150 / 0.323480 (0.050670) | 0.004842 / 0.007986 (-0.003144) | 0.003289 / 0.004328 (-0.001039) | 0.076200 / 0.004250 (0.071950) | 0.036287 / 0.037052 (-0.000766) | 0.345764 / 0.258489 (0.087275) | 0.387439 / 0.293841 (0.093599) | 0.031547 / 0.128546 (-0.096999) | 0.011586 / 0.075646 (-0.064060) | 0.086599 / 0.419271 (-0.332672) | 0.042338 / 0.043533 (-0.001195) | 0.355384 / 0.255139 (0.100246) | 0.369474 / 0.283200 (0.086275) | 0.090945 / 0.141683 (-0.050738) | 1.488632 / 1.452155 (0.036477) | 1.554606 / 1.492716 (0.061890) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212962 / 0.018006 (0.194956) | 0.399647 / 0.000490 (0.399157) | 0.003055 / 0.000200 (0.002856) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024349 / 0.037411 (-0.013062) | 0.100342 / 0.014526 (0.085817) | 0.105657 / 0.176557 (-0.070899) | 0.175139 / 0.737135 (-0.561997) | 0.110014 / 0.296338 (-0.186324) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434785 / 0.215209 (0.219575) | 4.346950 / 2.077655 (2.269295) | 2.045411 / 1.504120 (0.541291) | 1.844258 / 1.541195 (0.303064) | 1.889503 / 1.468490 (0.421013) | 0.704530 / 4.584777 (-3.880247) | 3.362435 / 3.745712 (-0.383277) | 2.797205 / 5.269862 (-2.472656) | 1.504431 / 4.565676 (-3.061245) | 0.083331 / 0.424275 (-0.340945) | 0.012274 / 0.007607 (0.004666) | 0.531123 / 0.226044 (0.305078) | 5.322588 / 2.268929 (3.053660) | 2.483875 / 55.444624 (-52.960750) | 2.147218 / 6.876477 (-4.729258) | 2.164024 / 2.142072 (0.021952) | 0.807191 / 4.805227 (-3.998036) | 0.151189 / 6.500664 (-6.349475) | 0.068027 / 0.075469 (-0.007442) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316001 / 1.841788 (-0.525787) | 13.892785 / 8.074308 (5.818477) | 13.485982 / 10.191392 (3.294590) | 0.138904 / 0.680424 (-0.541520) | 0.016748 / 0.534201 (-0.517453) | 0.379840 / 0.579283 (-0.199443) | 0.384854 / 0.434364 (-0.049510) | 0.464275 / 0.540337 (-0.076063) | 0.553622 / 1.386936 (-0.833314) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009179 / 0.011353 (-0.002174) | 0.005080 / 0.011008 (-0.005929) | 0.099061 / 0.038508 (0.060553) | 0.035252 / 0.023109 (0.012143) | 0.293496 / 0.275898 (0.017598) | 0.360365 / 0.323480 (0.036886) | 0.007757 / 0.007986 (-0.000229) | 0.003985 / 0.004328 (-0.000343) | 0.076021 / 0.004250 (0.071771) | 0.042286 / 0.037052 (0.005233) | 0.316542 / 0.258489 (0.058053) | 0.341711 / 0.293841 (0.047870) | 0.037970 / 0.128546 (-0.090576) | 0.011977 / 0.075646 (-0.063670) | 0.333341 / 0.419271 (-0.085931) | 0.049211 / 0.043533 (0.005678) | 0.297401 / 0.255139 (0.042262) | 0.313424 / 0.283200 (0.030224) | 0.105719 / 0.141683 (-0.035964) | 1.487879 / 1.452155 (0.035724) | 1.529785 / 1.492716 (0.037068) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201062 / 0.018006 (0.183056) | 0.438024 / 0.000490 (0.437534) | 0.002129 / 0.000200 (0.001929) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026422 / 0.037411 (-0.010989) | 0.104863 / 0.014526 (0.090337) | 0.114934 / 0.176557 (-0.061623) | 0.179173 / 0.737135 (-0.557962) | 0.119734 / 0.296338 (-0.176604) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397195 / 0.215209 (0.181986) | 3.959945 / 2.077655 (1.882290) | 1.794059 / 1.504120 (0.289939) | 1.606814 / 1.541195 (0.065619) | 1.674681 / 1.468490 
(0.206191) | 0.680130 / 4.584777 (-3.904646) | 3.742730 / 3.745712 (-0.002982) | 2.021793 / 5.269862 (-3.248069) | 1.322726 / 4.565676 (-3.242950) | 0.084519 / 0.424275 (-0.339756) | 0.012012 / 0.007607 (0.004405) | 0.510076 / 0.226044 (0.284032) | 5.084163 / 2.268929 (2.815234) | 2.241032 / 55.444624 (-53.203592) | 1.911936 / 6.876477 (-4.964540) | 1.947992 / 2.142072 (-0.194080) | 0.838779 / 4.805227 (-3.966448) | 0.165103 / 6.500664 (-6.335561) | 0.060722 / 0.075469 (-0.014747) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180274 / 1.841788 (-0.661514) | 14.285364 / 8.074308 (6.211056) | 12.941205 / 10.191392 (2.749813) | 0.153815 / 0.680424 (-0.526609) | 0.028554 / 0.534201 (-0.505647) | 0.441551 / 0.579283 (-0.137732) | 0.434906 / 0.434364 (0.000542) | 0.516120 / 0.540337 (-0.024217) | 0.603062 / 1.386936 (-0.783874) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007287 / 0.011353 (-0.004066) | 0.004998 / 0.011008 (-0.006010) | 0.074997 / 0.038508 (0.036489) | 0.033209 / 0.023109 (0.010100) | 0.336836 / 0.275898 (0.060938) | 0.365562 / 0.323480 (0.042082) | 0.005739 / 0.007986 (-0.002246) | 0.003942 / 0.004328 (-0.000387) | 0.074681 / 0.004250 (0.070430) | 0.049530 / 0.037052 (0.012478) | 0.335642 / 0.258489 (0.077153) | 0.388874 / 0.293841 (0.095033) | 0.037198 / 0.128546 (-0.091349) | 0.011983 / 0.075646 (-0.063664) | 0.087601 / 0.419271 (-0.331671) | 0.053761 / 0.043533 (0.010228) | 0.334142 / 0.255139 (0.079003) | 0.351348 / 0.283200 (0.068148) | 0.107462 / 0.141683 (-0.034221) | 1.497015 / 1.452155 (0.044860) | 1.608287 / 1.492716 (0.115571) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255395 / 0.018006 (0.237389) | 0.439141 / 0.000490 (0.438651) | 0.021391 / 0.000200 (0.021191) | 0.000230 / 0.000054 (0.000176) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028331 / 0.037411 (-0.009080) | 0.108744 / 0.014526 (0.094218) | 0.118201 / 0.176557 (-0.058355) | 0.189556 / 0.737135 (-0.547579) | 0.123112 / 0.296338 (-0.173226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431394 / 0.215209 (0.216185) | 4.296121 / 2.077655 (2.218466) | 2.126371 / 1.504120 (0.622251) | 1.978178 / 1.541195 (0.436983) | 2.082674 / 1.468490 (0.614184) | 0.701789 / 4.584777 (-3.882988) | 3.791495 / 3.745712 (0.045783) | 2.115267 / 5.269862 (-3.154594) | 1.342159 / 4.565676 (-3.223517) | 0.088132 / 0.424275 (-0.336143) | 0.011903 / 0.007607 (0.004295) | 0.528398 / 0.226044 (0.302354) | 5.270077 / 2.268929 (3.001148) | 2.498860 / 55.444624 (-52.945765) | 2.155515 / 6.876477 (-4.720962) | 2.192866 / 2.142072 (0.050793) | 0.859596 / 4.805227 (-3.945631) | 0.170544 / 6.500664 (-6.330120) | 0.063883 / 0.075469 (-0.011587) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240679 / 1.841788 (-0.601109) | 14.497379 / 8.074308 (6.423071) | 12.881417 / 10.191392 (2.690025) | 0.147295 / 0.680424 (-0.533129) | 0.017465 / 0.534201 (-0.516736) | 0.424695 / 0.579283 (-0.154588) | 0.414929 / 0.434364 (-0.019435) | 0.536079 / 0.540337 (-0.004259) | 0.638245 / 1.386936 (-0.748691) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008806 / 0.011353 (-0.002547) | 0.004712 / 0.011008 (-0.006297) | 0.102383 / 0.038508 (0.063875) | 0.030260 / 0.023109 (0.007151) | 0.330175 / 0.275898 (0.054277) | 0.376816 / 0.323480 (0.053337) | 0.008065 / 0.007986 (0.000079) | 0.003534 / 0.004328 (-0.000794) | 0.078824 / 0.004250 (0.074573) | 0.036704 / 0.037052 (-0.000349) | 0.331848 / 0.258489 (0.073359) | 0.351031 / 0.293841 (0.057190) | 0.033406 / 0.128546 (-0.095140) | 0.011543 / 0.075646 (-0.064103) | 0.322114 / 0.419271 (-0.097157) | 0.041249 / 0.043533 (-0.002284) | 0.309413 / 0.255139 (0.054274) | 0.329156 / 0.283200 (0.045956) | 0.088636 / 0.141683 (-0.053047) | 1.508226 / 1.452155 (0.056071) | 1.557203 / 1.492716 (0.064487) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196696 / 0.018006 (0.178690) | 0.426360 / 0.000490 (0.425870) | 0.001263 / 0.000200 (0.001064) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023747 / 0.037411 (-0.013664) | 0.100756 / 0.014526 (0.086230) | 0.105817 / 0.176557 (-0.070739) | 0.172573 / 0.737135 (-0.564562) | 0.110705 / 0.296338 (-0.185634) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436913 / 0.215209 (0.221704) | 4.365753 / 2.077655 (2.288099) | 2.201346 / 1.504120 (0.697226) | 1.978800 / 1.541195 (0.437605) | 1.951585 / 1.468490 
(0.483094) | 0.699208 / 4.584777 (-3.885569) | 3.381492 / 3.745712 (-0.364220) | 2.966174 / 5.269862 (-2.303687) | 1.487521 / 4.565676 (-3.078156) | 0.082673 / 0.424275 (-0.341602) | 0.012436 / 0.007607 (0.004829) | 0.553276 / 0.226044 (0.327232) | 5.554081 / 2.268929 (3.285153) | 2.653286 / 55.444624 (-52.791339) | 2.404788 / 6.876477 (-4.471689) | 2.484610 / 2.142072 (0.342537) | 0.817073 / 4.805227 (-3.988154) | 0.151619 / 6.500664 (-6.349045) | 0.068259 / 0.075469 (-0.007210) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.273481 / 1.841788 (-0.568306) | 13.908825 / 8.074308 (5.834517) | 13.106695 / 10.191392 (2.915303) | 0.139609 / 0.680424 (-0.540815) | 0.028425 / 0.534201 (-0.505776) | 0.395626 / 0.579283 (-0.183657) | 0.405526 / 0.434364 (-0.028838) | 0.465628 / 0.540337 (-0.074709) | 0.542824 / 1.386936 (-0.844112) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076568 / 0.038508 (0.038060) | 0.028109 / 0.023109 (0.004999) | 0.342768 / 0.275898 (0.066870) | 0.390680 / 0.323480 (0.067200) | 0.005056 / 0.007986 (-0.002930) | 0.003359 / 0.004328 (-0.000970) | 0.075835 / 0.004250 (0.071584) | 0.038888 / 0.037052 (0.001836) | 0.343489 / 0.258489 (0.085000) | 0.400766 / 0.293841 (0.106925) | 0.031816 / 0.128546 (-0.096730) | 0.011637 / 0.075646 (-0.064009) | 0.085474 / 0.419271 (-0.333797) | 0.041740 / 0.043533 (-0.001793) | 0.342501 / 0.255139 (0.087362) | 0.377467 / 0.283200 (0.094267) | 0.091532 / 0.141683 (-0.050151) | 1.457368 / 1.452155 (0.005213) | 1.537187 / 1.492716 (0.044471) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187507 / 0.018006 (0.169501) | 0.415706 / 0.000490 (0.415217) | 0.001816 / 0.000200 (0.001616) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026251 / 0.037411 (-0.011161) | 0.106609 / 0.014526 (0.092083) | 0.109822 / 0.176557 (-0.066735) | 0.180462 / 0.737135 (-0.556674) | 0.114647 / 0.296338 (-0.181691) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.438804 / 0.215209 (0.223595) | 4.387960 / 2.077655 (2.310306) | 2.056804 / 1.504120 (0.552684) | 1.848584 / 1.541195 (0.307389) | 1.939470 / 1.468490 (0.470980) | 0.702539 / 4.584777 (-3.882238) | 3.419535 / 3.745712 (-0.326177) | 1.933889 / 5.269862 (-3.335973) | 1.189631 / 4.565676 (-3.376045) | 0.084105 / 0.424275 (-0.340170) | 0.012520 / 0.007607 (0.004913) | 0.538125 / 0.226044 (0.312081) | 5.370000 / 2.268929 (3.101072) | 2.497487 / 55.444624 (-52.947137) | 2.156054 / 6.876477 (-4.720423) | 2.225909 / 2.142072 (0.083837) | 0.811456 / 4.805227 (-3.993771) | 0.151461 / 6.500664 (-6.349203) | 0.066940 / 0.075469 (-0.008530) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.301246 / 1.841788 (-0.540542) | 14.459755 / 8.074308 (6.385447) | 13.147151 / 10.191392 (2.955759) | 0.129236 / 0.680424 (-0.551188) | 0.016427 / 0.534201 (-0.517774) | 0.380047 / 0.579283 (-0.199236) | 0.392217 / 0.434364 (-0.042147) | 0.470338 / 0.540337 (-0.069999) | 0.559800 / 1.386936 (-0.827136) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/673/comments | https://api.github.com/repos/huggingface/datasets/issues/673/events | https://github.com/huggingface/datasets/issues/673 | 709,603,989 | MDU6SXNzdWU3MDk2MDM5ODk= | 673 | blog_authorship_corpus crashed | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 1 | 2020-09-26T20:15:28Z | 2022-02-15T10:47:58Z | 2022-02-15T10:47:58Z | null | This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/673/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nWe'll free some memory"
] |
https://api.github.com/repos/huggingface/datasets/issues/3771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3771/comments | https://api.github.com/repos/huggingface/datasets/issues/3771/events | https://github.com/huggingface/datasets/pull/3771 | 1,146,561,140 | PR_kwDODunzps4zRHsd | 3,771 | Fix DuplicatedKeysError on msr_sqa dataset | [] | closed | false | null | 0 | 2022-02-22T07:44:24Z | 2022-02-22T08:12:40Z | 2022-02-22T08:12:39Z | null | Fix #3770. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3771/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3771.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3771",
"merged_at": "2022-02-22T08:12:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3771.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3771"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5656/comments | https://api.github.com/repos/huggingface/datasets/issues/5656/events | https://github.com/huggingface/datasets/pull/5656 | 1,634,156,563 | PR_kwDODunzps5Mjxoo | 5,656 | Fix `fsspec.open` when using an HTTP proxy | [] | closed | false | null | 2 | 2023-03-21T15:23:29Z | 2023-03-23T14:14:50Z | 2023-03-23T13:15:46Z | null | Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't support reading proxy env variables by default. This PR enables reading them automatically.
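For illustration, here is a minimal sketch of the mechanism involved (not the actual patch; the URL below is a placeholder). `aiohttp` only reads the proxy environment variables when its client session is created with `trust_env=True`, and that flag can be forwarded through `fsspec`'s HTTP filesystem via `client_kwargs`:
```python
# Minimal sketch, assuming fsspec's HTTP filesystem forwards `client_kwargs`
# to aiohttp.ClientSession; the URL is a placeholder.
import fsspec

# trust_env=True makes aiohttp honor HTTP_PROXY / HTTPS_PROXY / NO_PROXY,
# mirroring what `requests` does out of the box.
storage_options = {"client_kwargs": {"trust_env": True}}

with fsspec.open("https://example.com/data.json", **storage_options) as f:
    payload = f.read()
```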
Read [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support).
For context, [the Python library `requests`](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the standard library's `urllib.request.urlopen`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen) support this automatically by default. Many common programs behave the same way, including cURL, APT, Wget, and many others. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5656/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5656.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5656",
"merged_at": "2023-03-23T13:15:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5656.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5656"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005351 / 0.011008 (-0.005657) | 0.096325 / 0.038508 (0.057817) | 0.034204 / 0.023109 (0.011095) | 0.328080 / 0.275898 (0.052182) | 0.361519 / 0.323480 (0.038039) | 0.005954 / 0.007986 (-0.002032) | 0.004106 / 0.004328 (-0.000222) | 0.072827 / 0.004250 (0.068576) | 0.050522 / 0.037052 (0.013470) | 0.326975 / 0.258489 (0.068486) | 0.373180 / 0.293841 (0.079339) | 0.037024 / 0.128546 (-0.091522) | 0.012347 / 0.075646 (-0.063299) | 0.332341 / 0.419271 (-0.086931) | 0.050695 / 0.043533 (0.007162) | 0.328298 / 0.255139 (0.073159) | 0.352808 / 0.283200 (0.069608) | 0.101637 / 0.141683 (-0.040046) | 1.435172 / 1.452155 (-0.016982) | 1.529797 / 1.492716 (0.037080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305727 / 0.018006 (0.287721) | 0.583951 / 0.000490 (0.583462) | 0.011699 / 0.000200 (0.011499) | 0.000345 / 0.000054 (0.000290) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027917 / 0.037411 (-0.009495) | 0.107698 / 0.014526 (0.093173) | 0.120572 / 0.176557 (-0.055985) | 0.176066 / 0.737135 (-0.561069) | 0.125348 / 0.296338 (-0.170991) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411980 / 0.215209 (0.196771) | 4.113135 / 2.077655 (2.035480) | 1.868725 / 1.504120 (0.364605) | 1.677422 / 1.541195 (0.136227) | 1.796759 / 1.468490 
(0.328269) | 0.701957 / 4.584777 (-3.882820) | 3.830742 / 3.745712 (0.085030) | 2.170444 / 5.269862 (-3.099418) | 1.345097 / 4.565676 (-3.220580) | 0.086661 / 0.424275 (-0.337614) | 0.013073 / 0.007607 (0.005466) | 0.519150 / 0.226044 (0.293106) | 5.193447 / 2.268929 (2.924518) | 2.391155 / 55.444624 (-53.053470) | 2.076610 / 6.876477 (-4.799867) | 2.245557 / 2.142072 (0.103484) | 0.846496 / 4.805227 (-3.958731) | 0.169246 / 6.500664 (-6.331418) | 0.066360 / 0.075469 (-0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196344 / 1.841788 (-0.645444) | 15.640363 / 8.074308 (7.566055) | 14.936144 / 10.191392 (4.744752) | 0.163613 / 0.680424 (-0.516811) | 0.017900 / 0.534201 (-0.516301) | 0.425377 / 0.579283 (-0.153906) | 0.431119 / 0.434364 (-0.003245) | 0.513669 / 0.540337 (-0.026669) | 0.592970 / 1.386936 (-0.793966) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007958 / 0.011353 (-0.003395) | 0.005707 / 0.011008 (-0.005301) | 0.075377 / 0.038508 (0.036869) | 0.037126 / 0.023109 (0.014016) | 0.344589 / 0.275898 (0.068691) | 0.381060 / 0.323480 (0.057580) | 0.006592 / 0.007986 (-0.001393) | 0.004479 / 0.004328 (0.000151) | 0.074456 / 0.004250 (0.070206) | 0.054087 / 0.037052 (0.017035) | 0.344942 / 0.258489 (0.086453) | 0.393174 / 0.293841 (0.099333) | 0.037926 / 0.128546 (-0.090620) | 0.012638 / 0.075646 (-0.063009) | 0.087743 / 0.419271 (-0.331529) | 0.050081 / 0.043533 (0.006548) | 0.340406 / 0.255139 (0.085267) | 0.361487 / 0.283200 (0.078287) | 0.108546 / 0.141683 (-0.033137) | 1.424626 / 1.452155 (-0.027529) | 1.553958 / 1.492716 (0.061242) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.329922 / 0.018006 (0.311916) | 0.523239 / 0.000490 (0.522749) | 0.012164 / 0.000200 (0.011964) | 0.000137 / 0.000054 (0.000082) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031935 / 0.037411 (-0.005477) | 0.115680 / 0.014526 (0.101154) | 0.130062 / 0.176557 (-0.046494) | 0.180679 / 0.737135 (-0.556457) | 0.135548 / 0.296338 (-0.160790) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.429648 / 0.215209 (0.214439) | 4.303342 / 2.077655 (2.225687) | 1.999395 / 1.504120 (0.495275) | 1.810354 / 1.541195 (0.269160) | 1.963132 / 1.468490 (0.494642) | 0.701654 / 4.584777 (-3.883122) | 3.844687 / 3.745712 (0.098975) | 2.153425 / 5.269862 (-3.116436) | 1.351541 / 4.565676 (-3.214135) | 0.086292 / 0.424275 (-0.337983) | 0.012491 / 0.007607 (0.004883) | 0.523144 / 0.226044 (0.297099) | 5.243283 / 2.268929 (2.974355) | 2.465849 / 55.444624 (-52.978775) | 2.154505 / 6.876477 (-4.721972) | 2.245500 / 2.142072 (0.103428) | 0.838902 / 4.805227 (-3.966326) | 0.169441 / 6.500664 (-6.331223) | 0.065631 / 0.075469 (-0.009838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262175 / 1.841788 (-0.579612) | 15.424650 / 8.074308 (7.350342) | 15.000718 / 10.191392 (4.809326) | 0.186328 / 0.680424 (-0.494096) | 0.018076 / 0.534201 (-0.516125) | 0.433458 / 0.579283 (-0.145825) | 0.424213 / 0.434364 (-0.010151) | 0.546568 / 0.540337 (0.006231) | 0.643529 / 1.386936 (-0.743407) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4346/comments | https://api.github.com/repos/huggingface/datasets/issues/4346/events | https://github.com/huggingface/datasets/issues/4346 | 1,235,067,062 | I_kwDODunzps5JnaC2 | 4,346 | GH Action to build documentation never ends | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-05-13T10:44:44Z | 2022-05-13T11:22:00Z | 2022-05-13T11:22:00Z | null | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally had to force-cancel the workflow. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4346/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5950/comments | https://api.github.com/repos/huggingface/datasets/issues/5950/events | https://github.com/huggingface/datasets/issues/5950 | 1,755,197,946 | I_kwDODunzps5onjH6 | 5,950 | Support for data with instance-wise dictionary as features | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2023-06-13T15:49:00Z | 2023-06-14T12:13:38Z | null | null | ### Feature request
I notice that when loading data instances whose features are Python dictionaries, the dictionary keys are broadcast so that every instance has the same set of keys. Please see an example in the Motivation section.
Is it possible to avoid this behavior, i.e., load the dictionary features as they are and not broadcast the keys across instances? Please note that these dictionaries would have to be processed dynamically into strings (and tokenized) at each training iteration.
### Motivation
I am trying to load a dataset from a JSON file. Each instance of the dataset has a feature that is a dictionary, but its keys depend on the instance, so any two instances may have different keys. For example, imagine a dataset in which each instance maps a math expression to a set of mutually redundant expressions:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
}
},
...
{
"index": 9999,
"feature": {
"x >= 6": ["x >= 6", "x >= 0", "x >= -1"],
...
}
},
...
```
When loading the dataset directly with `data = load_dataset("json", data_files=file_paths, split='train')`, each instance ends up with all the keys from the other instances, with None as the value for the keys it did not originally have. That is, the instance with index 0 becomes:
```
{
"index": 0,
"feature": {
"2 * x + y >= 3": ["2 * x + y >= 3", "4 * x + 2 * y >= 6"],
...
"x >= 6": None, # keys from other instances
...
}
},
```
This is not desirable. Moreover, an error is raised if I attempt to combine two such datasets using `data = concatenate_datasets(multi_datasets)`, perhaps because their dictionary features contain different keys.
A solution I can think of is to store the dictionary features as a long string and evaluate it later. Please kindly suggest any other solution that uses existing methods of datasets.
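For reference, a minimal sketch of that string-based workaround, assuming the `feature` field is serialized as a JSON string in the source file (the file name is illustrative):
```python
import json

from datasets import load_dataset

# Because "feature" is stored as a JSON string, Arrow sees a plain string
# column and no key broadcasting happens across instances.
data = load_dataset("json", data_files="expressions.json", split="train")

# Decode lazily at access time; with_transform runs on __getitem__ and does
# not write the decoded dicts back into the Arrow table.
data = data.with_transform(
    lambda batch: {**batch, "feature": [json.loads(s) for s in batch["feature"]]}
)

print(data[0]["feature"])  # only this instance's own keys
```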
### Your contribution
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5950/timeline | null | null | null | null | false | [
"Hi ! We use the Arrow columnar format under the hood, which doesn't support such dictionaries: each field must have a fixed type and exist in each sample.\r\n\r\nInstead you can restructure your data like\r\n```\r\n{\r\n \"index\": 0,\r\n \"keys\": [\"2 * x + y >= 3\"],\r\n \"values\": [[\"2 * x + y >= 3\", \"4 * x + 2 * y >= 6\"]],\r\n }\r\n},\r\n...\r\n{\r\n \"index\": 9999,\r\n \"keys\": [\"x >= 6\"],\r\n \"values\": [[\"x >= 6\", \"x >= 0\", \"x >= -1\"]],\r\n},\r\n...\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1116 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1116/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1116/comments | https://api.github.com/repos/huggingface/datasets/issues/1116/events | https://github.com/huggingface/datasets/pull/1116 | 757,133,502 | MDExOlB1bGxSZXF1ZXN0NTMyNTYwNDk4 | 1,116 | add dbpedia_14 dataset | [] | closed | false | null | 5 | 2020-12-04T14:13:59Z | 2020-12-07T10:06:54Z | 2020-12-05T15:36:23Z | null | This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1116/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1116/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1116.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1116",
"merged_at": "2020-12-05T15:36:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1116.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1116"
} | true | [
"Thanks for the review. \r\nCheers!",
"Hi @hfawaz, this week we are doing the 🤗 `datasets` sprint (see some details [here](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)).\r\n\r\nNothing more to do on your side but it means that if you register on the thread I linked above, you can have some goodies for the present dataset that you have already added (and a special goodie if you want to spend more time and add 2 other datasets as well).\r\n\r\nIf you want to join, just tell me (or post on the thread on the HuggingFace forum: https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)",
"Hello @thomwolf \r\nThanks for the feedback and for this invitation, indeed I would be glad to join you guys (you can add me). \r\nI will see if I have the time to implement a couple of datasets. \r\nCheers! ",
"@hfawaz invited you to the slack with your uha email.\r\n\r\nCheck your spam folder if you can't find the invitation :)",
"Oh thanks, but can you invite me on my gmail: [email protected] \r\nUHA is my old organization, I haven't had the time to update my online profiles yet.\r\nThank you "
] |
https://api.github.com/repos/huggingface/datasets/issues/1560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1560/comments | https://api.github.com/repos/huggingface/datasets/issues/1560/events | https://github.com/huggingface/datasets/pull/1560 | 765,814,964 | MDExOlB1bGxSZXF1ZXN0NTM5MDkzMzky | 1,560 | Adding the BrWaC dataset | [] | closed | false | null | 0 | 2020-12-14T03:03:56Z | 2020-12-18T15:56:56Z | 2020-12-18T15:56:55Z | null | Adding the BrWaC dataset, a large corpus of Portuguese language texts | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1560/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1560",
"merged_at": "2020-12-18T15:56:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1560"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5213/comments | https://api.github.com/repos/huggingface/datasets/issues/5213/events | https://github.com/huggingface/datasets/pull/5213 | 1,440,037,534 | PR_kwDODunzps5CalQ_ | 5,213 | Add support for different configs with `push_to_hub` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 8 | 2022-11-08T11:45:47Z | 2022-12-02T16:48:23Z | 2022-12-02T16:44:07Z | null | will solve #5151
@lhoestq @albertvillanova @mariosasko
This is still a super draft so please ignore code issues but I want to discuss some conceptually important things.
I suggest a way to do `.push_to_hub("repo_id", "config_name")` with pushing parquet files to directories named as `config_name` (inside `data/` dir as it is now), for example:
```
data
|__config-v1
train-00000-00002-...-.parquet
train-00001-00002-...-.parquet
...
|__config-v2
....
```
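For illustration, a hypothetical usage sketch from the user side (the repo and config names are made up, and the final API may differ since this is still a draft):
```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

# Each config's parquet files would land under their own data/<config_name>/ dir.
ds.push_to_hub("user/repo", "config-v1")
ds.push_to_hub("user/repo", "config-v2")

# Loading would then require an explicit config name, as with script-based datasets.
ds_v1 = load_dataset("user/repo", "config-v1")
```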
When loading a dataset, I parse these configs from the repository data files (only for the `"data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]*.*"` pattern that is used for parquet datasets pushed with `.push_to_hub`).
Therefore,
- when a user tries to load a dataset that has configs parsed from data files dir names without providing a config (like `load_dataset("repo")` instead of `load_dataset("repo", "config-v1")`) - raise an error and ask for a config, to be aligned with how it works in datasets with scripts.
- for backward compatibility: if a user tries to `.push_to_hub("repo", "config_name")` to an existing parquet repo with no configurations (all parquet files are directly in the `data/` dir) - raise an error. My initial idea was to raise a warning and move these files to another dir named after a config (e.g. "default") in a PR and suggest that the user merge it on the Hub. But there is no support for renaming (moving) files via `HfApi` yet, so it would require deleting and pushing again, if I understand it correctly.
This parsing approach can be extended to other Hub packaged modules, and to local packaged modules and other data files patterns
(except for cases when splits are in dir names `KEYWORDS_IN_DIR_NAME_BASE_PATTERNS` because we allow for arbitrary depth of directory hierarchy).
Do you think this is reasonable? I'm not sure how to provide flexibility (and backward compatibility) for not parsing configs and loading all the data into a single config, as it works now.
I also thought about getting information about configs from the README.md `dataset_info` ([example](https://huggingface.co/datasets/polinaeterna/test_push_two_configs/blob/main/README.md)). But that way we depend on it existing: it is created automatically with `.push_to_hub`, but what if it is accidentally deleted?
Also, what I don't like is that this parsing is part of the Module/DataFiles logic, not the Builder's, which is not aligned with datasets that have custom scripts. But I don't know how to implement the second approach within the current library's logic.
What do you think about all this? Am I missing something?
TODO:
- [ ] save cache in the same dir for configs of the same datasets
- [ ] fix verification errors
- [ ] correctly update `dataset_infos.json` too
- [ ] ...
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5213/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5213",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5213"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.",
"Nice thanks !\r\n\r\nWould it be possible to have the new folders at the same level as \"data\" ? This way they're all separated\r\n```\r\n├─ config-v1/\r\n│ ├── train-00000-00002-...-.parquet\r\n│ └── train-00001-00002-...-.parquet\r\n└ config-v2/\r\n ├── train-00000-00002-...-.parquet\r\n └── train-00001-00002-...-.parquet\r\n```\r\nand if you don't provide a config name, it goes in a folder named \"default\" instead, that would be loaded by default.\r\n\r\nWe could also write in the YAML something like\r\n```yaml\r\nconfigs:\r\n- name: config-v1\r\n data_dir: config-v1\r\n- name: config-v2\r\n data_dir: config-v2\r\n```\r\nand loading `config-v1` would be equivalent to run `load_dataset(ds_name, \"config-v1\", data_dir=\"config-v1\")`\r\n\r\nDo you think it would make sense ?\r\n\r\nFor backward compatibility we can just keep the \"data/*\" pattern. It's ok to expect users to have an updated version of `datasets` to be able to load datasets with configurations.",
"@lhoestq thank you for the feedback! i'll reflect on this on Moday, my mind just melted because of the fever.\r\n\r\n@mariosasko @albertvillanova what do you think?",
"Thanks for addressing this, @polinaeterna. It is good:\r\n- we support configs for datasets without scripts\r\n- we align the behavior to datasets with scripts as much as possible\r\n\r\nMaybe adding some tests will help clarify what is the expected behavior...",
"After some discussion with @lhoestq we decided that it's better to rely on metadata file than on data files patterns. \r\n\r\nSo we decided to introduce a new field to yaml (like `configs` or smth like that) that would contain arbitrary configs kwargs to be passed to loader, including `data_dir` and `data_files`. \r\nThis is more aligned with datasets with custom scripts where we explicitly write all the supported configs and config parameters in the code and is extendable to all packaged modules.\r\nThis would solve https://github.com/huggingface/datasets/issues/5209\r\n\r\n(@lhoestq was right 21 days ago, this is a more general solution idk why i ignored this...)",
"closed in favor of https://github.com/huggingface/datasets/pull/5331"
] |
https://api.github.com/repos/huggingface/datasets/issues/2242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2242/comments | https://api.github.com/repos/huggingface/datasets/issues/2242/events | https://github.com/huggingface/datasets/issues/2242 | 862,870,205 | MDU6SXNzdWU4NjI4NzAyMDU= | 2,242 | Link to datasets viewer on Quick Tour page returns "502 Bad Gateway" | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-04-20T14:19:51Z | 2021-04-20T15:02:45Z | 2021-04-20T15:02:45Z | null | Link to datasets viewer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway"
The same error with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2242/timeline | null | completed | null | null | false | [
"This should be fixed now!\r\n\r\ncc @srush "
] |
https://api.github.com/repos/huggingface/datasets/issues/5997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5997/comments | https://api.github.com/repos/huggingface/datasets/issues/5997/events | https://github.com/huggingface/datasets/issues/5997 | 1,781,582,818 | I_kwDODunzps5qMMvi | 5,997 | extend the map function so it can wrap around long text that does not fit in the context window | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2023-06-29T22:15:21Z | 2023-07-03T17:58:52Z | null | null | ### Feature request
I understand `dataset` provides a [`map`](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2849) function. This function takes in a callable that is used to tokenize the text on which a model is trained. Frequently this text will not fit within a model's context window. In this case it would be useful to wrap the text around into multiple rows, with each row fitting the model's context window. I tried to do it using this code as an example, which I borrowed from [here](https://stackoverflow.com/a/76343993/147530):
```
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
```
but running the code gives me this error:
```
File "/llm/fine-tune.py", line 117, in <module>
data = data.map(lambda samples: tokenizer(samples["text"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 580, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 545, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3087, in map
for rank, done, content in Dataset._map_single(**dataset_kwargs):
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3480, in _map_single
writer.write_batch(batch)
File "/llm/.env/lib/python3.9/site-packages/datasets/arrow_writer.py", line 556, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 3798, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 2962, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 394 but got length 447
```
The lambda function I have provided correctly chops up long text so that it wraps around (which is why 394 samples become 447 after wrapping), but the dataset `map` function does not like it.
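For reference, here is a minimal sketch of the workaround suggested later in the comments: drop the original columns so that the mapped batch is allowed to contain a different number of rows than the input.
```python
data = data.map(
    lambda samples: tokenizer(
        samples["text"],
        max_length=tokenizer.model_max_length,
        truncation=True,
        stride=4,
        return_overflowing_tokens=True,
    ),
    batched=True,
    remove_columns=data.column_names,  # drop the inputs so the output batch may have a different length
)
```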
### Motivation
please see above
### Your contribution
I'm afraid I don't have much knowledge to help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5997/timeline | null | null | null | null | false | [
"I just noticed the [docs](https://github.com/huggingface/datasets/blob/main/src/datasets/arrow_dataset.py#L2881C11-L2881C200) say:\r\n\r\n>If batched is `True` and `batch_size` is `n > 1`, then the function takes a batch of `n` examples as input and can return a batch with `n` examples, or with an arbitrary number of examples.\r\n\r\nso maybe this is a bug then.",
"All the values in a batch must be of the same length. So one solution is dropping all the input columns:\r\n```python\r\ndata = data.map(lambda samples: tokenizer(samples[\"text\"], max_length=tokenizer.model_max_length, truncation=True, stride=4, return_overflowing_tokens=True), batched=True, remove_columns=data.column_names)\r\n```\r\n\r\nAnother is padding/transforming the input columns to the tokenizer output's length (447). "
] |
https://api.github.com/repos/huggingface/datasets/issues/3217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3217/comments | https://api.github.com/repos/huggingface/datasets/issues/3217/events | https://github.com/huggingface/datasets/issues/3217 | 1,045,029,710 | I_kwDODunzps4-SeNO | 3,217 | Fix code quality bug in riddle_sense dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-11-04T17:40:32Z | 2021-11-04T17:50:02Z | 2021-11-04T17:50:02Z | null | ## Describe the bug
```
datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3217/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3217/timeline | null | completed | null | null | false | [
"To give more context: https://github.com/psf/black/issues/318. `black` doesn't treat this as a bug, but `flake8` does. \r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2802/comments | https://api.github.com/repos/huggingface/datasets/issues/2802/events | https://github.com/huggingface/datasets/pull/2802 | 970,848,302 | MDExOlB1bGxSZXF1ZXN0NzEyNzM0MTc3 | 2,802 | add openwebtext2 | [] | closed | false | null | 3 | 2021-08-14T07:09:03Z | 2021-08-23T14:06:14Z | 2021-08-23T14:06:14Z | null | openwebtext2 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of its sub-datasets from The Pile data. So I created an independent dataset using The Pile preliminary components.
When I was creating the dataset card, I found there is room for improvement in creating/editing dataset cards. I've made it an issue: #2797
Also, I am wondering whether the import of The Pile dataset is actively being undertaken (because I may need it soon)? #1675 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2802/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2802.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2802",
"merged_at": "2021-08-23T14:06:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2802.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2802"
} | true | [
"It seems we need to `pip install jsonlines` to pass the checks ?",
"Hi ! Do you really need `jsonlines` ? I think it simply uses `json.loads` under the hood.\r\n\r\nCurrently the test are failing because `jsonlines` is not part of the extra requirements `TESTS_REQUIRE` in setup.py\r\n\r\nSo either you can replace `jsonlines` with a simple for loop on the lines of the files and use `json.loads`, or you can add `TESTS_REQUIRE` to the test requirements (but in this case users will have to install it as well).",
"Thanks for your suggestion. I now know `io` and json lines format better and has changed `jsonlines` to just `readlines`."
] |
https://api.github.com/repos/huggingface/datasets/issues/3949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3949/comments | https://api.github.com/repos/huggingface/datasets/issues/3949/events | https://github.com/huggingface/datasets/pull/3949 | 1,171,467,981 | PR_kwDODunzps40jia- | 3,949 | Remove GLEU metric | [] | closed | false | null | 1 | 2022-03-16T19:35:31Z | 2022-04-12T20:43:26Z | 2022-04-12T20:37:09Z | null | Remove the GLEU metric as it is not actually implemented. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 1,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3949/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3949",
"merged_at": "2022-04-12T20:37:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3949"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3406/comments | https://api.github.com/repos/huggingface/datasets/issues/3406/events | https://github.com/huggingface/datasets/pull/3406 | 1,074,366,050 | PR_kwDODunzps4vjV21 | 3,406 | Fix module inference for archive with a directory | [] | closed | false | null | 0 | 2021-12-08T12:39:12Z | 2021-12-08T13:03:30Z | 2021-12-08T13:03:29Z | null | Fix module inference for an archive file that contains files within a directory.
Fix #3405. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3406/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3406.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3406",
"merged_at": "2021-12-08T13:03:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3406.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3406"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4730/comments | https://api.github.com/repos/huggingface/datasets/issues/4730/events | https://github.com/huggingface/datasets/issues/4730 | 1,313,421,263 | I_kwDODunzps5OSTfP | 4,730 | Loading imagenet-1k validation split takes much more RAM than expected | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-07-21T15:14:06Z | 2022-07-21T16:41:04Z | 2022-07-21T16:41:04Z | null | ## Describe the bug
Loading the validation split of imagenet-1k into memory takes much more RAM than expected. Given that ImageNet-1k is about 150 GB in total, with 50,000 validation images and 1,281,167 train images, I would expect only about 6 GB to be loaded in RAM.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation")
print(dataset)
"""prints
Dataset({
features: ['image', 'label'],
num_rows: 50000
})
"""
pipe_inputs = dataset["image"]
# and wait :-)
```
## Expected results
Use only < 10 GB RAM when loading the images.
## Actual results

```
Using custom data configuration default
Reusing dataset imagenet-1k (/home/fxmarty/.cache/huggingface/datasets/imagenet-1k/default/1.0.0/a1e9bfc56c3a7350165007d1176b15e9128fcaf9ab972147840529aed3ae52bc)
Killed
```
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- datasets commit: 4e4222f1b6362c2788aec0dd2cd8cede6dd17b80
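As a side note (not part of the original report), here is a sketch of a lighter-weight access pattern that avoids materializing the whole `image` column in RAM, assuming per-example decoding is acceptable:
```python
# decode images one at a time instead of building the full "image" column in memory
for example in dataset:
    image = example["image"]  # a single decoded PIL image
    ...  # feed it to the pipeline here
```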
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4730/timeline | null | completed | null | null | false | [
"My bad, `482 * 418 * 50000 * 3 / 1000000 = 30221 MB` ( https://stackoverflow.com/a/42979315 ).\r\n\r\nMeanwhile `256 * 256 * 50000 * 3 / 1000000 = 9830 MB`. We are loading the non-cropped images and that is why we take so much RAM."
] |
https://api.github.com/repos/huggingface/datasets/issues/5227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5227/comments | https://api.github.com/repos/huggingface/datasets/issues/5227/events | https://github.com/huggingface/datasets/issues/5227 | 1,444,620,094 | I_kwDODunzps5WGyc- | 5,227 | datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files | [] | closed | false | null | 1 | 2022-11-10T21:57:06Z | 2022-11-10T22:05:43Z | 2022-11-10T22:05:43Z | null | ### Describe the bug
From these lines:
from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")
I get this error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
And yet the 'wikisql' is reported to exist via the list_datasets().
Any help appreciated.
### Steps to reproduce the bug
From these lines:
from datasets import list_datasets, load_dataset
dataset = load_dataset("wikisql","binary")
I get this error message:
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
And yet the 'wikisql' is reported to exist via the list_datasets().
Any help appreciated.
### Expected behavior
Dataset should load. This same code used to work.
### Environment info
Mac OS | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5227/timeline | null | completed | null | null | false | [
"Fixed. Please close."
] |
https://api.github.com/repos/huggingface/datasets/issues/3871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3871/comments | https://api.github.com/repos/huggingface/datasets/issues/3871/events | https://github.com/huggingface/datasets/pull/3871 | 1,163,714,113 | PR_kwDODunzps40KRcM | 3,871 | add pandas to env command | [] | closed | false | null | 2 | 2022-03-09T09:48:51Z | 2022-03-09T11:21:38Z | 2022-03-09T11:21:37Z | null | Pandas is a required packages and used quite a bit. I don't see any downside with adding its version to the `datasets-cli env` command. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3871/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3871",
"merged_at": "2022-03-09T11:21:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3871"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3871). All of your documentation changes will be reflected on that endpoint.",
"Think failures are unrelated - feel free to merge whenever you want :-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/92 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/92/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/92/comments | https://api.github.com/repos/huggingface/datasets/issues/92/events | https://github.com/huggingface/datasets/pull/92 | 617,341,505 | MDExOlB1bGxSZXF1ZXN0NDE3Mjc1ODky | 92 | [WIP] add wmt14 | [] | closed | false | null | 0 | 2020-05-13T10:42:03Z | 2020-05-16T11:17:38Z | 2020-05-16T11:17:37Z | null | WMT14 takes forever to download :-/
- WMT is the first dataset that uses an abstract class IMO, so I had to modify the `load_dataset_module` a bit. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/92/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/92/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/92.diff",
"html_url": "https://github.com/huggingface/datasets/pull/92",
"merged_at": "2020-05-16T11:17:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/92.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/92"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4197/comments | https://api.github.com/repos/huggingface/datasets/issues/4197/events | https://github.com/huggingface/datasets/pull/4197 | 1,211,342,558 | PR_kwDODunzps42kyXD | 4,197 | Add remove_columns=True | [] | closed | false | null | 4 | 2022-04-21T17:28:13Z | 2022-04-22T14:51:41Z | 2022-04-22T14:45:30Z | null | This should fix all the issues we have with in-place operations in mapping functions. This is crucial, as there are places where we do some weird things like:
```
def apply(batch):
    batch_size = len(batch["id"])
    batch["text"] = ["potato" for _ in range(batch_size)]
    return {}
# Columns are: {"id": int}
dset.map(apply, batched=True, remove_columns="text") # crashes because `text` is not in the original columns
dset.map(apply, batched=True) # mapped datasets has `text` column
```
In this PR we suggest having `remove_columns=True` so that we ignore the input completely and just use the output to generate the mapped dataset. This means that in-place operations won't have any effects anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4197/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4197",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4197"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Any reason why we can't just do `[inputs.copy()]` in this line for in-place operations to not have effects anymore:\r\nhttps://github.com/huggingface/datasets/blob/bf432011ff9155a5bc16c03956bc63e514baf80d/src/datasets/arrow_dataset.py#L2232.\r\n\r\n(in the `batched` case, we can also copy the inputs' values (list objects) to ignore in-place modifications to the inputs' columns)\r\n\r\nI think `remove_columns=True` has no meaning, so I'm not a fan of this change.",
"@mariosasko copy does have a cost associated with it ... and plus you'll have to consider `deepcopy` Imagine columnds that are list of list of list of list .... Though I have to agree that `remove_columns=True` doesn't make sense (but, IMO, neither does it in its current use-case as it should refer to `input_columns`) ",
"Okay closing this PR for the following reasons:\r\n - `remove_columns=True` was expected to keep the `.update`-like operator for `.map`. I initially thought it would be a good way to ignore function side effects and only keep output of that function (cf. PR description).\r\n - expected `remove_columns=True` is a bad API according to @mariosasko and introduces unecessary changes for little gain (strictly equivalent to `remove_columns=dset.column_names`)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3235/comments | https://api.github.com/repos/huggingface/datasets/issues/3235/events | https://github.com/huggingface/datasets/pull/3235 | 1,047,808,263 | PR_kwDODunzps4uPr9Z | 3,235 | Addd options to use updated bleurt checkpoints | [] | closed | false | null | 0 | 2021-11-08T18:53:54Z | 2021-11-12T14:05:28Z | 2021-11-12T14:05:28Z | null | Adds options to use newer recommended checkpoint (as of 2021/10/8) bleurt-20 and its distilled versions.
Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20
This change won't affect the default behavior of metrics/bleurt. It only adds the option to load newer checkpoints as
`datasets.load_metric('bleurt', 'bleurt-20')`
`bleurt-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3235/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3235",
"merged_at": "2021-11-12T14:05:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3235"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1376/comments | https://api.github.com/repos/huggingface/datasets/issues/1376/events | https://github.com/huggingface/datasets/pull/1376 | 760,309,300 | MDExOlB1bGxSZXF1ZXN0NTM1MTYyODU4 | 1,376 | Add SETimes Dataset | [] | closed | false | null | 1 | 2020-12-09T13:01:08Z | 2020-12-10T16:11:57Z | 2020-12-10T16:11:56Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1376/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1376/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1376.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1376",
"merged_at": "2020-12-10T16:11:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1376.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1376"
} | true | [
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4927/comments | https://api.github.com/repos/huggingface/datasets/issues/4927/events | https://github.com/huggingface/datasets/pull/4927 | 1,360,428,139 | PR_kwDODunzps4-S0we | 4,927 | fix BLEU metric card | [] | closed | false | null | 0 | 2022-09-02T17:00:56Z | 2022-09-09T16:28:15Z | 2022-09-09T16:28:15Z | null | I've fixed some typos in BLEU metric card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4927/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4927/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4927.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4927",
"merged_at": "2022-09-09T16:28:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4927.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4927"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4002/comments | https://api.github.com/repos/huggingface/datasets/issues/4002/events | https://github.com/huggingface/datasets/pull/4002 | 1,179,263,787 | PR_kwDODunzps408Cfp | 4,002 | Support streaming conll2012_ontonotesv5 dataset | [] | closed | false | null | 1 | 2022-03-24T09:49:56Z | 2022-03-24T10:53:41Z | 2022-03-24T10:48:47Z | null | Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4002/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4002",
"merged_at": "2022-03-24T10:48:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4002"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4950/comments | https://api.github.com/repos/huggingface/datasets/issues/4950/events | https://github.com/huggingface/datasets/pull/4950 | 1,365,458,633 | PR_kwDODunzps4-jWZ1 | 4,950 | Update Enwik8 broken link and information | [] | closed | false | null | 1 | 2022-09-08T03:15:00Z | 2022-09-24T22:14:35Z | 2022-09-08T14:51:00Z | null | The current enwik8 dataset link gives a 502 bad gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, as well as adds a little bit more information about enwik8. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4950/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4950",
"merged_at": "2022-09-08T14:51:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4950"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1046/comments | https://api.github.com/repos/huggingface/datasets/issues/1046/events | https://github.com/huggingface/datasets/issues/1046 | 756,122,709 | MDU6SXNzdWU3NTYxMjI3MDk= | 1,046 | Dataset.map() turns tensors into lists? | [] | closed | false | null | 2 | 2020-12-03T11:43:46Z | 2022-10-05T12:12:41Z | 2022-10-05T12:12:41Z | null | I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists!
```python
import datasets
import torch
from datasets import load_dataset
print("version datasets", datasets.__version__)
dataset = load_dataset("snli", split='train[0:50]')
def tokenizer_fn(example):
# actually uses a tokenizer which does something like:
return {'input_ids': torch.tensor([[0, 1, 2]])}
print("First item in dataset:\n", dataset[0])
tokenized = tokenizer_fn(dataset[0])
print("Tokenized hyp:\n", tokenized)
dataset_tok = dataset.map(tokenizer_fn, batched=False,
remove_columns=['label', 'premise', 'hypothesis'])
print("Tokenized using map:\n", dataset_tok[0])
print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids']))
```
The output is:
```
version datasets 1.1.3
Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c)
First item in dataset:
{'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1}
Tokenized hyp:
{'input_ids': tensor([[0, 1, 2]])}
Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow
Tokenized using map:
{'input_ids': [[0, 1, 2]]}
<class 'torch.Tensor'> <class 'list'>
```
Or am I doing something wrong?
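For reference, a minimal sketch of the approach suggested in the comments (keep the mapped values as plain lists, then set the output format to torch):
```python
dataset_tok.set_format(type="torch")
print(type(dataset_tok[0]["input_ids"]))  # now a torch.Tensor instead of a list
```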
| {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1046/timeline | null | completed | null | null | false | [
"A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.",
"It is expected behavior, you should set the format to `\"torch\"` as you mentioned to get pytorch tensors back.\r\nBy default datasets returns pure python objects."
] |
https://api.github.com/repos/huggingface/datasets/issues/1246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1246/comments | https://api.github.com/repos/huggingface/datasets/issues/1246/events | https://github.com/huggingface/datasets/pull/1246 | 758,418,652 | MDExOlB1bGxSZXF1ZXN0NTMzNTk0NjIz | 1,246 | arXiv dataset added | [] | closed | false | null | 0 | 2020-12-07T11:20:23Z | 2020-12-07T14:22:58Z | 2020-12-07T14:22:58Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1246/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1246/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1246.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1246",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1246.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1246"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5140/comments | https://api.github.com/repos/huggingface/datasets/issues/5140/events | https://github.com/huggingface/datasets/pull/5140 | 1,415,075,530 | PR_kwDODunzps5BHTNq | 5,140 | Make the KeyHasher FIPS compliant | [] | closed | false | null | 0 | 2022-10-19T14:25:52Z | 2022-11-07T16:20:43Z | 2022-11-07T16:20:43Z | null | MD5 is not FIPS compliant, thus I am proposing this minimal change to make the datasets package FIPS compliant | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5140/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5140",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5140"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2586/comments | https://api.github.com/repos/huggingface/datasets/issues/2586/events | https://github.com/huggingface/datasets/pull/2586 | 936,747,588 | MDExOlB1bGxSZXF1ZXN0NjgzNDEwMDU3 | 2,586 | Fix misalignment in SQuAD | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-05T06:42:20Z | 2021-07-12T14:11:10Z | 2021-07-07T13:18:51Z | null | Fix misalignment between:
- the answer text and
- the answer_start within the context
by keeping original leading blank spaces in the context.
Fix #2585. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2586/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2586",
"merged_at": "2021-07-07T13:18:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2586"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4906/comments | https://api.github.com/repos/huggingface/datasets/issues/4906/events | https://github.com/huggingface/datasets/issues/4906 | 1,353,223,925 | I_kwDODunzps5QqI71 | 4,906 | Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-08-28T02:23:24Z | 2023-05-13T13:53:30Z | 2022-10-03T12:22:50Z | null | ## Describe the bug
Not able to import datasets
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.pyplot as plt
import pandas as pd
import sys
import tensorflow as tf
import plotly.express as px
import transformers
import tokenizers
import nlp as nlp
import utils
import datasets
```
## Expected results
import should work normally
## Actual results
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-b3b5b0b62103> in <module>
13 import nlp as nlp
14 import utils
---> 15 import datasets
~\anaconda3\lib\site-packages\datasets\__init__.py in <module>
44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
45 from .info import DatasetInfo, MetricInfo
---> 46 from .inspect import (
47 get_dataset_config_info,
48 get_dataset_config_names,
~\anaconda3\lib\site-packages\datasets\inspect.py in <module>
28 from .download.streaming_download_manager import StreamingDownloadManager
29 from .info import DatasetInfo
---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory
31 from .utils.file_utils import relative_to_absolute_path
32 from .utils.logging import get_logger
~\anaconda3\lib\site-packages\datasets\load.py in <module>
53 from .iterable_dataset import IterableDataset
54 from .metric import Metric
---> 55 from .packaged_modules import (
56 _EXTENSION_TO_MODULE,
57 _MODULE_SUPPORTS_METADATA,
~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module>
4 from typing import List
5
----> 6 from .csv import csv
7 from .imagefolder import imagefolder
8 from .json import json
~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module>
13
14
---> 15 logger = datasets.utils.logging.get_logger(__name__)
16
17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"]
AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.8
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4906/timeline | null | completed | null | null | false | [
"Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date).",
"i am also facing this issue\r\n\r\n\r\n```\r\n----> 1 import datasets\r\n 3 dataset = datasets.load_dataset(\"ucberkeley-dlab/measuring-hate-speech\", \"binary\")\r\n 4 df = dataset[\"train\"].to_pandas()\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/__init__.py:52\r\n 50 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 51 from .info import DatasetInfo, MetricInfo\r\n---> 52 from .inspect import (\r\n 53 get_dataset_config_info,\r\n 54 get_dataset_config_names,\r\n 55 get_dataset_infos,\r\n 56 get_dataset_split_names,\r\n 57 inspect_dataset,\r\n 58 inspect_metric,\r\n 59 list_datasets,\r\n 60 list_metrics,\r\n 61 )\r\n 62 from .iterable_dataset import IterableDataset\r\n 63 from .load import load_dataset, load_dataset_builder, load_from_disk, load_metric\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/inspect.py:30\r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n...\r\n---> 16 logger = datasets.utils.logging.get_logger(__name__)\r\n 19 if datasets.config.PYARROW_VERSION.major >= 7:\r\n 21 def pa_table_to_pylist(table):\r\n```",
"I am facing the same question. And this happens when i installing `evaluate` package while `jupyter notebook` running. I'm not sure if the error occured because of trying to import the package installed when the notebook is running. Surpringly when i stop the notebook and rerun, the issue has been solved itself. Hope this will be helpful : )",
"I also got this error.\r\nIt helped me to find the python process and kill it, then restart the kernel and the error disappeared.",
"> I also got this error. It helped me to find the python process and kill it, then restart the kernel and the error disappeared.\r\n\r\nYes!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5521/comments | https://api.github.com/repos/huggingface/datasets/issues/5521/events | https://github.com/huggingface/datasets/pull/5521 | 1,578,418,289 | PR_kwDODunzps5JpWnp | 5,521 | Fix bug when casting empty array to class labels | [] | closed | false | null | 1 | 2023-02-09T18:47:59Z | 2023-02-13T20:40:48Z | 2023-02-12T11:17:17Z | null | Fix https://github.com/huggingface/datasets/issues/5520. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5521/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5521",
"merged_at": "2023-02-12T11:17:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5521"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4861/comments | https://api.github.com/repos/huggingface/datasets/issues/4861/events | https://github.com/huggingface/datasets/issues/4861 | 1,343,260,220 | I_kwDODunzps5QEIY8 | 4,861 | Using disk for memory with the method `from_dict` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2022-08-18T15:18:18Z | 2023-01-26T18:36:28Z | null | null | **Is your feature request related to a problem? Please describe.**
I start with an empty dataset. In a loop, at each iteration, I create a new dataset with the method `from_dict` (based on some data I load) and I concatenate this new dataset with the one from the previous iteration. After some iterations, I get an OOM error.
**Describe the solution you'd like**
The method `from_dict` loads the data in RAM. It could be good to add an option to use the disk instead.
**Describe alternatives you've considered**
To solve the problem, I have to do an intermediate step where I save the new datasets at each iteration with `save_to_disk`. Once it's done, I open them all and concatenate them.
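A sketch of that workaround (the `iter_chunks()` helper is a hypothetical placeholder for however the data is loaded):
```python
from datasets import Dataset, concatenate_datasets, load_from_disk

paths = []
for i, chunk in enumerate(iter_chunks()):  # iter_chunks() is a hypothetical data-loading helper
    ds = Dataset.from_dict(chunk)          # this chunk is still built in RAM...
    path = f"chunks/chunk_{i}"
    ds.save_to_disk(path)                  # ...but is then written to disk
    paths.append(path)

# datasets loaded from disk are memory-mapped, so this concatenation stays light on RAM
final = concatenate_datasets([load_from_disk(p) for p in paths])
```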
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4861/timeline | null | null | null | null | false | [
"This issue was also causing an OOM in @nateraw 's workflow and shows again that behavior is confusing - we should definitely switch to using the disk IMO"
] |
https://api.github.com/repos/huggingface/datasets/issues/4103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4103/comments | https://api.github.com/repos/huggingface/datasets/issues/4103/events | https://github.com/huggingface/datasets/pull/4103 | 1,193,987,104 | PR_kwDODunzps41s3T4 | 4,103 | Add the `GSM8K` dataset | [] | closed | false | null | 2 | 2022-04-06T04:07:52Z | 2022-04-12T15:38:28Z | 2022-04-12T10:21:16Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4103/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4103",
"merged_at": "2022-04-12T10:21:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4103"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because it's outdated, but the task tags are updated on `master`, merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4877/comments | https://api.github.com/repos/huggingface/datasets/issues/4877/events | https://github.com/huggingface/datasets/pull/4877 | 1,348,246,755 | PR_kwDODunzps49qF-w | 4,877 | Fix documentation card of covid_qa_castorini dataset | [] | closed | false | null | 1 | 2022-08-23T16:52:33Z | 2022-08-23T18:05:01Z | 2022-08-23T18:05:00Z | null | Fix documentation card of covid_qa_castorini dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4877/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4877.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4877",
"merged_at": "2022-08-23T18:05:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4877.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4877"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5878/comments | https://api.github.com/repos/huggingface/datasets/issues/5878/events | https://github.com/huggingface/datasets/issues/5878 | 1,718,203,843 | I_kwDODunzps5mabXD | 5,878 | Prefetching for IterableDataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 5 | 2023-05-20T15:25:40Z | 2023-06-01T17:40:00Z | null | null | ### Feature request
Add support for prefetching the next n batches through `IterableDataset` to reduce the batch-loading bottleneck in the training loop.
### Motivation
The primary motivation behind this is to use hardware accelerators alongside a streaming dataset. This is required when you are in a low-RAM or low-disk-space setting, as well as for quick iteration where you're iterating through different accelerator environments (e.g. changing EC2 instances quickly to figure out batches/sec for a particular architecture).
Currently, using the IterableDataset results in accelerators becoming basically useless due to the massive bottleneck induced by the dataset lazy loading/transform/mapping.
I've considered two alternatives (a sketch of the first one is included below):
1. A PyTorch DataLoader that handles this. However, I'm using jax, and I believe this is a piece of functionality that should live in the stream class.
2. Replicating the "num_workers" part of the PyTorch DataLoader to eagerly load batches and apply the transform so Arrow caching will automatically cache results and make them accessible.
### Your contribution
I may or may not have time to do this. Currently, I've written a basic multiprocessing approach to handle the eager DataLoader for my own use case, with code that's not integrated into datasets. I'd definitely see this as being the default over the regular Dataset for most people, given that they wouldn't have to wait on the datasets while also not worrying about performance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5878/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5878/timeline | null | null | null | null | false | [
"Very cool! Do you have a link to the code that you're using to eagerly fetch the data? Would also be interested in hacking around something here for pre-fetching iterable datasets",
"I ended up just switching back to the pytorch dataloader and using it's multiprocessing functionality to handle this :(. I'm just not that familiar with python multiprocessing to get something to work in jupyter (kept having weird behaviors happening with zombies living after the cell finished).",
"Ultimately settled on using webdataset to circumvent huggingface datasets entirely. Would definitely switch back if: https://github.com/huggingface/datasets/issues/5337 was resolved.",
"Hi! You can combine `datasets` with `torchdata` to prefetch `IterableDataset`'s samples:\r\n```python\r\nfrom datasets import load_dataset\r\nfrom torchdata.datapipes.iter import IterableWrapper, HuggingFaceHubReader\r\nfrom torch.utils.data import DataLoader\r\n\r\nds = load_dataset(\"sst\", split=\"train\", streaming=True)\r\n# processing...\r\ndp = IterableWrapper(ds)\r\ndp = dp.prefetch(100)\r\ndl = DataLoader(dp, batch_size=8)\r\n\r\ni = iter(dl)\r\nnext(i)\r\n```",
"Hey @mariosasko! Thanks for the tip here - introducing prefetch with `torchdata` didn't really give me any performance difference vs not prefetching, but the concept is definitely one that could be really beneficial. Are there any benchmarks that show the speed-up you can get with `torchdata`'s prefetch just for comparison?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5268/comments | https://api.github.com/repos/huggingface/datasets/issues/5268/events | https://github.com/huggingface/datasets/pull/5268 | 1,455,633,978 | PR_kwDODunzps5DPIsp | 5,268 | Sharded save_to_disk + multiprocessing | [] | closed | false | null | 4 | 2022-11-18T18:50:01Z | 2022-12-14T18:25:52Z | 2022-12-14T18:22:58Z | null | Added `num_shards=` and `num_proc=` to `save_to_disk()`
EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub`
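For reference, a quick usage sketch of the parameters described above (`ds`, the paths, and the repo name are placeholders):
```python
# write the dataset as a fixed number of shards, using several processes
ds.save_to_disk("path/to/dataset", num_shards=128, num_proc=8)

# or let the number of shards be derived from a target shard size
ds.save_to_disk("path/to/dataset", max_shard_size="500MB")

# push with an explicit number of shards
ds.push_to_hub("user/dataset", num_shards=128)
```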
I also:
- deprecated the `fs` parameter in favor of `storage_options` (for consistency with the rest of the lib) in `save_to_disk` and `load_from_disk`
- always embed the image/audio data in Arrow when doing `save_to_disk`
- added a tqdm bar in `save_to_disk`
- used the `MockFileSystem` in tests for `save_to_disk` and `load_from_disk`
- removed the unused integration tests with S3, since we can now test with `mockfs` instead of `s3fs`
TODO:
- [x] implem save_to_disk for dataset dict
- [x] save_to_disk for dataset dict tests
- [x] deprecate fs in dataset dict load_from_disk as well
- [x] update docs
Close #5263
Close https://github.com/huggingface/datasets/issues/4196
Close https://github.com/huggingface/datasets/issues/4351 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5268/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5268",
"merged_at": "2022-12-14T18:22:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5268"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added both num_shards and max_shard_size in push_to_hub/save_to_disk. Will take care of updating the tests later",
"It's ready for a final review @mariosasko and @albertvillanova, let me know what you think :)",
"Took your comments into account, and also changed `iflatmap_unordered` to take an iterable of kwargs to make the code more redable :)"
] |
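A minimal sketch of the sharded `save_to_disk` API described in the record above, on a toy dataset (assumes a `datasets` release that includes `num_shards=`, `num_proc=` and `max_shard_size=`):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": [f"example {i}" for i in range(10_000)]})

# Write the dataset as 8 Arrow shards using 4 worker processes.
ds.save_to_disk("my_dataset", num_shards=8, num_proc=4)

# Or let the library derive the number of shards from a size budget.
ds.save_to_disk("my_dataset_by_size", max_shard_size="50MB")

# num_shards is also available when uploading to the Hub:
# ds.push_to_hub("username/my_dataset", num_shards=8)
```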
https://api.github.com/repos/huggingface/datasets/issues/5257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5257/comments | https://api.github.com/repos/huggingface/datasets/issues/5257/events | https://github.com/huggingface/datasets/pull/5257 | 1,452,656,891 | PR_kwDODunzps5DFENm | 5,257 | remove an unused statement | [] | closed | false | null | 0 | 2022-11-17T04:00:50Z | 2022-11-18T11:04:08Z | 2022-11-18T11:04:08Z | null | remove the unused statement: `input_pairs = list(zip())` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5257/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5257",
"merged_at": "2022-11-18T11:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5257"
} | true | [] |
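For context on why the removed line was dead code: `zip()` called with no iterables yields nothing, so the assignment always produced an empty list. A quick illustration (not code from the PR itself):

```python
# zip() with no arguments is an empty iterator,
# so the removed statement always bound an empty list.
input_pairs = list(zip())
assert input_pairs == []
```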
https://api.github.com/repos/huggingface/datasets/issues/4886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4886/comments | https://api.github.com/repos/huggingface/datasets/issues/4886/events | https://github.com/huggingface/datasets/issues/4886 | 1,349,285,569 | I_kwDODunzps5QbHbB | 4,886 | Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 9 | 2022-08-24T11:24:21Z | 2023-02-02T02:40:53Z | null | null | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#scrollTo=N3ml_7f8kzDd
## Actual results
```
File "/home/jean/projects/cold_diffusion/celebA.py", line 4, in <module>
dataset = load_dataset('huggan/CelebA-HQ')
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/load.py", line 1793, in load_dataset
builder_instance.download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/builder.py", line 1274, in _prepare_split
for key, table in logging.tqdm(
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/jean/miniconda3/envs/seq/lib/python3.10/site-packages/pyarrow/parquet/__init__.py", line 286, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
## Environment info
- `datasets` version: datasets-2.4.1.dev0
- Platform: Ubuntu 18.04
- Python version: 3.10
- PyArrow version: pyarrow 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4886/timeline | null | null | null | null | false | [
"Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?",
"Could you put something in place to catch these problems? I'm seeing this on another dataset consistently too and I guess I can't fix it in code?",
"Hey,\r\n\r\nYes the notebook I used to upload this dataset can be found here: https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing.\r\n\r\nIf you have time to regenerate the dataset, would be great.",
"Sorry, maybe I wasn't clear enough that it's a different dataset `laion2B-multi-joined-translated-to-en`. I think there should be checks in the upload, tests on the server, or validation after download (hashes) to catch these problems.\r\n\r\nLots of bandwidth wasted otherwise! /cc @mariosasko",
"Yes @alexjc sorry was more a reply to @JeanKaddour.\r\n\r\nAnd indeed it'd be great to have additional checks to avoid these errors. ",
"cc @severo since such checks should probably be implemented on the datasets-server side.",
"Hi,\r\n\r\nIt seems the problem is still persist. I have encountered the exact same problem using just 2 line of code above. \r\n\r\nThe error code is as follows:\r\n\r\n```\r\n發生例外狀況: DatasetGenerationError\r\nAn error occurred while generating the dataset\r\npyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\n File \"/code/ddpm_learn/train.py\", line 65, in <module>\r\n dataset = load_dataset(\"huggan/CelebA-HQ\", cache_dir=\"./CelebA-HQ\"\r\ndatasets.builder.DatasetGenerationError: An error occurred while generating the dataset\r\n```",
"Yes for the moment refer to the notebook linked above if you want to create a HF dataset yourself",
"Hi @NielsRogge ,\r\nI can help to push the dataset to the cloud. However, I cannot locate the situation so far. I wonder if \r\n1. the downloaded files so far has corruption s.t. the file cannot generate properly, or\r\n2. the downloaded files has no bug, the bug is caused by buggy upload program so that I can use what I have just downloaded to re-upload to cloud\r\n\r\nThank, \r\nAllan"
] |
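The `ArrowInvalid` error in the record above means a downloaded file does not carry the Parquet footer magic. A small, generic sanity check for a local file before loading it (the path is a hypothetical example; this helper is not part of the `datasets` API):

```python
import pyarrow.parquet as pq

def looks_like_parquet(path: str) -> bool:
    # Valid Parquet files start and end with the 4-byte magic "PAR1".
    with open(path, "rb") as f:
        head = f.read(4)
        f.seek(-4, 2)  # seek relative to the end of the file
        tail = f.read(4)
    return head == b"PAR1" and tail == b"PAR1"

path = "data/train-00000-of-00001.parquet"  # hypothetical local shard
if looks_like_parquet(path):
    table = pq.read_table(path)
else:
    print(f"{path} is truncated or not a Parquet file")
```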
https://api.github.com/repos/huggingface/datasets/issues/2568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2568/comments | https://api.github.com/repos/huggingface/datasets/issues/2568/events | https://github.com/huggingface/datasets/pull/2568 | 932,934,795 | MDExOlB1bGxSZXF1ZXN0NjgwMjE5MDU2 | 2,568 | Add interleave_datasets for map-style datasets | [] | closed | false | null | 0 | 2021-06-29T17:19:24Z | 2021-07-01T09:33:34Z | 2021-07-01T09:33:33Z | null | ### Add interleave_datasets for map-style datasets
Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`.
Previously, it only supported iterable datasets (i.e. `IterableDataset` objects).
### Implementation details
It works by concatenating the datasets and then re-ordering the indices to make the new dataset.
### TODO
- [x] tests
- [x] docs
Close #2563 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2568/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2568.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2568",
"merged_at": "2021-07-01T09:33:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2568.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2568"
} | true | [] |
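A minimal sketch of the feature described in the record above, using the public `interleave_datasets` API with two toy map-style datasets:

```python
from datasets import Dataset, interleave_datasets

d1 = Dataset.from_dict({"text": ["a1", "a2", "a3"]})
d2 = Dataset.from_dict({"text": ["b1", "b2", "b3"]})

# By default, examples are taken from each dataset in turn.
mixed = interleave_datasets([d1, d2])
print(mixed["text"])  # ['a1', 'b1', 'a2', 'b2', 'a3', 'b3']

# Or sample sources according to probabilities, with a seed for reproducibility.
sampled = interleave_datasets([d1, d2], probabilities=[0.7, 0.3], seed=42)
```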
https://api.github.com/repos/huggingface/datasets/issues/5079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5079/comments | https://api.github.com/repos/huggingface/datasets/issues/5079/events | https://github.com/huggingface/datasets/pull/5079 | 1,398,609,305 | PR_kwDODunzps5AQemi | 5,079 | refactor: replace AssertionError with more meaningful exceptions (#5074) | [] | closed | false | null | 1 | 2022-10-06T01:39:35Z | 2022-10-07T14:35:43Z | 2022-10-07T14:33:10Z | null | Closes #5074
Replaces `AssertionError` in the following files with more descriptive exceptions:
- `src/datasets/arrow_reader.py`
- `src/datasets/builder.py`
- `src/datasets/utils/version.py`
The issue listed more files that needed to be fixed, but the rest of them were contained in the top-level `datasets` directory, which was removed when #4974 was merged | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5079/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5079",
"merged_at": "2022-10-07T14:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5079"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
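The pattern the PR applies, shown in isolation with a hypothetical condition and message (not the exact library code):

```python
# Before: assert statements are stripped under `python -O` and raise
# a bare AssertionError that tells the caller little.
def check_split_before(split_name):
    assert split_name in {"train", "test"}, "bad split"

# After: an explicit exception always runs and explains what went wrong.
def check_split_after(split_name):
    if split_name not in {"train", "test"}:
        raise ValueError(
            f"Unknown split {split_name!r}; expected one of 'train' or 'test'."
        )
```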
https://api.github.com/repos/huggingface/datasets/issues/3 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3/comments | https://api.github.com/repos/huggingface/datasets/issues/3/events | https://github.com/huggingface/datasets/issues/3 | 600,180,050 | MDU6SXNzdWU2MDAxODAwNTA= | 3 | [Feature] More dataset outputs | [] | closed | false | null | 3 | 2020-04-15T10:08:14Z | 2020-05-04T06:12:27Z | 2020-05-04T06:12:27Z | null | Add the following dataset outputs:
- Spark
- Pandas | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3/timeline | null | completed | null | null | false | [
"Yes!\r\n- pandas will be a one-liner in `arrow_dataset`: https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_pandas\r\n- for Spark I have no idea. let's investigate that at some point",
"For Spark it looks to be pretty straightforward as well https://spark.apache.org/docs/latest/sql-pyspark-pandas-with-arrow.html but looks to be having a dependency to Spark is necessary, then nevermind we can skip it",
"Now Pandas is available."
] |
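With today's API, the Pandas output requested above is a one-liner; a sketch on a toy dataset (the issue predates the current method names):

```python
from datasets import Dataset

ds = Dataset.from_dict({"label": [0, 1, 0], "text": ["a", "b", "c"]})

# Materialize the underlying Arrow table as a pandas DataFrame
# (backed by pyarrow.Table.to_pandas under the hood).
df = ds.to_pandas()
print(df.head())
```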
https://api.github.com/repos/huggingface/datasets/issues/17 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/17/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/17/comments | https://api.github.com/repos/huggingface/datasets/issues/17/events | https://github.com/huggingface/datasets/pull/17 | 605,753,027 | MDExOlB1bGxSZXF1ZXN0NDA4MDk3NjM0 | 17 | Add Pandas as format type | [] | closed | false | null | 0 | 2020-04-23T18:20:14Z | 2020-04-27T18:07:50Z | 2020-04-27T18:07:48Z | null | As detailed in the title ^^ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/17/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/17/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/17.diff",
"html_url": "https://github.com/huggingface/datasets/pull/17",
"merged_at": "2020-04-27T18:07:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/17.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/17"
} | true | [] |
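A sketch of the format type added by the PR above: with the "pandas" format, indexing returns pandas objects instead of Python dicts/lists (toy dataset for illustration):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})

ds.set_format(type="pandas")
print(type(ds[:2]))  # <class 'pandas.core.frame.DataFrame'>

ds.reset_format()    # back to plain Python objects
```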
https://api.github.com/repos/huggingface/datasets/issues/5574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5574/comments | https://api.github.com/repos/huggingface/datasets/issues/5574/events | https://github.com/huggingface/datasets/issues/5574 | 1,598,104,691 | I_kwDODunzps5fQSRz | 5,574 | c4 dataset streaming fails with `FileNotFoundError` | [] | closed | false | null | 9 | 2023-02-24T07:57:32Z | 2023-03-06T13:14:11Z | 2023-02-27T04:03:38Z | null | ### Describe the bug
Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then iterating over it fails with a `FileNotFoundError`.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("c4", "en", split="train", streaming=True)
next(iter(dataset))
```
causes a
```
FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-train.00000-of-01024.json.gz
```
I can download this file manually though e.g. by entering this URL in a browser.
There is an underlying HTTP 403 status code:
```
aiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/8ef8d75b0e045dec4aa5123a671b4564466b0707086a7ed1ba8721626dfffbc9?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-train.00000-of-01024.json.gz%3B+filename%3D%22c4-train.00000-of-01024.json.gz%22%3B&response-content-type=application/gzip&Expires=1677483770&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvOGVmOGQ3NWIwZTA0NWRlYzRhYTUxMjNhNjcxYjQ1NjQ0NjZiMDcwNzA4NmE3ZWQxYmE4NzIxNjI2ZGZmZmJjOT9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzQ4Mzc3MH19fV19&Signature=yjL3UeY72cf2xpnvPvD68eAYOEe2qtaUJV55sB-jnPskBJEMwpMJcBZvg2~GqXZdM3O-GWV-Z3CI~d4u5VCb4YZ-HlmOjr3VBYkvox2EKiXnBIhjMecf2UVUPtxhTa9kBVlWjqu4qKzB9gKXZF2Cwpp5ctLzapEaT2nnqF84RAL-rsqMA3I~M8vWWfivQsbBK63hMfgZqqKMgdWM0iKMaItveDl0ufQ29azMFmsR7qd8V7sU2Z-F1fAeohS8HpN9OOnClW34yi~YJ2AbgZJJBXA~qsylfVA0Qp7Q~yX~q4P8JF1vmJ2BjkiSbGrj3bAXOGugpOVU5msI52DT88yMdA__&Key-Pair-Id=KVTP0A1DKRTAX')
```
### Expected behavior
This should retrieve the first example from the C4 validation set. This worked a few days ago but stopped working now.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31
- Python version: 3.9.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5574/timeline | null | completed | null | null | false | [
"Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nspigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True, use_auth_token=True)\r\nsample = next(iter(spigi))\r\n```\r\n\r\n<details>\r\n<summary> Traceback </summary>\r\n\r\n```python\r\n---------------------------------------------------------------------------\r\nClientResponseError Traceback (most recent call last)\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:407, in HTTPFileSystem._info(self, url, **kwargs)\r\n 405 try:\r\n 406 info.update(\r\n--> 407 await _file_info(\r\n 408 self.encode_url(url),\r\n 409 size_policy=policy,\r\n 410 session=session,\r\n 411 **self.kwargs,\r\n 412 **kwargs,\r\n 413 )\r\n 414 )\r\n 415 if info.get(\"size\") is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:792, in _file_info(url, session, size_policy, **kwargs)\r\n 791 async with r:\r\n--> 792 r.raise_for_status()\r\n 794 # TODO:\r\n 795 # recognise lack of 'Accept-Ranges',\r\n 796 # or 'Accept-Ranges': 'none' (not 'bytes')\r\n 797 # to mean streaming only, no random access => return None\r\n\r\nFile ~/venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py:1005, in ClientResponse.raise_for_status(self)\r\n 1004 self.release()\r\n-> 1005 raise ClientResponseError(\r\n 1006 self.request_info,\r\n 1007 self.history,\r\n 1008 status=self.status,\r\n 1009 message=self.reason,\r\n 1010 headers=self.headers,\r\n 1011 )\r\n\r\nClientResponseError: 403, message='Forbidden', 
url=URL('[https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8''dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX](https://cdn-lfs.huggingface.co/repos/e2/89/e28905247d6f48bb4edad5baf9b1bb4158e897a13fdf18bf3b8ee89ff8387ab8/46eca7431a7b6bad344bf451800e5b10cea1dd168f26d1027a6d9eb374b7fac3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27dev.csv%3B+filename%3D%22dev.csv%22%3B&response-content-type=text/csv&Expires=1677494732&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zL2UyLzg5L2UyODkwNTI0N2Q2ZjQ4YmI0ZWRhZDViYWY5YjFiYjQxNThlODk3YTEzZmRmMThiZjNiOGVlODlmZjgzODdhYjgvNDZlY2E3NDMxYTdiNmJhZDM0NGJmNDUxODAwZTViMTBjZWExZGQxNjhmMjZkMTAyN2E2ZDllYjM3NGI3ZmFjMz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPXRleHQlMkZjc3YiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2Nzc0OTQ3MzJ9fX1dfQ__&Signature=EzQB9f7xPckvqfFB6LzcyR-wzTnQCqtPDdWtQUzZ3QJ-gY-IHG5mxQITJgMr1nVTbJZrPmGAaDngMcPFUfSQa8RmCqYH~dZl-UGE8CO4neKNUT1DvA2WEvLDS4WaAJ3SN-9rX0uFb03~c1QS78cIgIRboYvf6ugKiJz86Bd7Vs~tcp201JFR0A6jIMseqApOnkb9d8dHMP3Ny~F6gO3Qf2QpEWM-QsDIyw2Kz2QV55nq8TsDpRYZCZo50~WwD~73Hej0PoDhEA1K37d19pa0CQhkaN-gjCrbT9xLabbvhJWa~ZkWcMdD0teCgjYqv1wKyvFXDAxukxLGEc7OBXVbYw__&Key-Pair-Id=KVTP0A1DKRTAX)')\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[5], line 4\r\n 1 from datasets import load_dataset\r\n 3 spigi = load_dataset(\"kensho/spgispeech\", \"dev\", split=\"validation\", streaming=True)\r\n----> 4 sample = next(iter(spigi))\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:937, in IterableDataset.__iter__(self)\r\n 934 yield from self._iter_pytorch(ex_iterable)\r\n 935 return\r\n--> 937 for key, example in ex_iterable:\r\n 938 if self.features:\r\n 939 # `IterableDataset` automatically fills missing columns with None.\r\n 940 # This is done with `_apply_feature_types_on_example`.\r\n 941 yield _apply_feature_types_on_example(\r\n 942 example, self.features, token_per_repo_id=self._token_per_repo_id\r\n 943 )\r\n\r\nFile ~/datasets/src/datasets/iterable_dataset.py:113, in ExamplesIterable.__iter__(self)\r\n 112 def __iter__(self):\r\n--> 113 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/kensho--spgispeech/5fbf75dd9ef795a9b5a673457d2cbaf0b8fa0de8fb62acbd1da338d83a41e2f0/spgispeech.py:186, in Spgispeech._generate_examples(self, 
local_extracted_archive_paths, archives, meta_path)\r\n 183 dict_keys = [\"wav_filename\", \"wav_filesize\", \"transcript\"]\r\n 185 logging.info(\"Reading metadata...\")\r\n--> 186 with open(meta_path, encoding=\"utf-8\") as f:\r\n 187 csvreader = csv.DictReader(f, delimiter=\"|\")\r\n 188 metadata = {x[\"wav_filename\"]: dict((k, x[k]) for k in dict_keys) for x in csvreader}\r\n\r\nFile ~/datasets/src/datasets/streaming.py:70, in extend_module_for_streaming.<locals>.wrap_auth.<locals>.wrapper(*args, **kwargs)\r\n 68 @wraps(function)\r\n 69 def wrapper(*args, **kwargs):\r\n---> 70 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile ~/datasets/src/datasets/download/streaming_download_manager.py:495, in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 493 kwargs = {**kwargs, **new_kwargs}\r\n 494 try:\r\n--> 495 file_obj = fsspec.open(file, mode=mode, *args, **kwargs).open()\r\n 496 except ValueError as e:\r\n 497 if str(e) == \"Cannot seek streaming HTTP file\":\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:135, in OpenFile.open(self)\r\n 128 def open(self):\r\n 129 \"\"\"Materialise this as a real open file without context\r\n 130 \r\n 131 The OpenFile object should be explicitly closed to avoid enclosed file\r\n 132 instances persisting. You must, therefore, keep a reference to the OpenFile\r\n 133 during the life of the file-like it generates.\r\n 134 \"\"\"\r\n--> 135 return self.__enter__()\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/spec.py:1106, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1104 else:\r\n 1105 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1106 f = self._open(\r\n 1107 path,\r\n 1108 mode=mode,\r\n 1109 block_size=block_size,\r\n 1110 autocommit=ac,\r\n 1111 cache_options=cache_options,\r\n 1112 **kwargs,\r\n 1113 )\r\n 1114 if compression is not None:\r\n 1115 from fsspec.compression import compr\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:346, in HTTPFileSystem._open(self, path, mode, block_size, autocommit, cache_type, cache_options, size, **kwargs)\r\n 344 kw[\"asynchronous\"] = self.asynchronous\r\n 345 kw.update(kwargs)\r\n--> 346 size = size or self.info(path, **kwargs)[\"size\"]\r\n 347 session = sync(self.loop, self.set_session)\r\n 348 if block_size and size:\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:113, in sync_wrapper.<locals>.wrapper(*args, **kwargs)\r\n 110 @functools.wraps(func)\r\n 111 def wrapper(*args, **kwargs):\r\n 112 self = obj or args[0]\r\n--> 113 return sync(self.loop, func, *args, **kwargs)\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:98, in sync(loop, func, timeout, *args, **kwargs)\r\n 96 raise FSTimeoutError from return_result\r\n 97 elif isinstance(return_result, BaseException):\r\n---> 98 raise return_result\r\n 99 else:\r\n 100 return return_result\r\n\r\nFile ~/venv/lib/python3.9/site-packages/fsspec/asyn.py:53, in _runner(event, coro, result, timeout)\r\n 51 coro = asyncio.wait_for(coro, timeout=timeout)\r\n 52 try:\r\n---> 53 result[0] = await coro\r\n 54 except Exception as ex:\r\n 55 result[0] = ex\r\n\r\nFile 
~/venv/lib/python3.9/site-packages/fsspec/implementations/http.py:420, in HTTPFileSystem._info(self, url, **kwargs)\r\n 417 except Exception as exc:\r\n 418 if policy == \"get\":\r\n 419 # If get failed, then raise a FileNotFoundError\r\n--> 420 raise FileNotFoundError(url) from exc\r\n 421 logger.debug(str(exc))\r\n 423 return {\"name\": url, \"size\": None, **info, \"type\": \"file\"}\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/kensho/spgispeech/resolve/main/data/meta/dev.csv\r\n```\r\n</details>",
"Hi ! We're investigating this issue, sorry for the inconvenience",
"This has been resolved ! Thanks for reporting",
"Wow, thanks for the very quick fix!",
"This problem now appears again, this time with an underlying HTTP 502 status code:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 502, message='Bad Gateway', url=URL('https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/en/c4-validation.00002-of-00008.json.gz')\r\n```",
"Re-executing a minute later, the underlying cause is an HTTP 403 status code, as reported yesterday:\r\n\r\n```\r\naiohttp.client_exceptions.ClientResponseError: 403, message='Forbidden', url=URL('https://cdn-lfs.huggingface.co/datasets/allenai/c4/4bf6b248b0f910dcde2cdf2118d6369d8208c8f9515ec29ab73e531f380b18e2?response-content-disposition=attachment%3B+filename*%3DUTF-8''c4-validation.00002-of-00008.json.gz%3B+filename%3D%22c4-validation.00002-of-00008.json.gz%22%3B&response-content-type=application/gzip&Expires=1677571273&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL2RhdGFzZXRzL2FsbGVuYWkvYzQvNGJmNmIyNDhiMGY5MTBkY2RlMmNkZjIxMThkNjM2OWQ4MjA4YzhmOTUxNWVjMjlhYjczZTUzMWYzODBiMThlMj9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSomcmVzcG9uc2UtY29udGVudC10eXBlPWFwcGxpY2F0aW9uJTJGZ3ppcCIsIkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTY3NzU3MTI3M319fV19&Signature=WW42NOKkLuX~xVB1QfbkqzdvGo2AOXpgbF3PjTXy6iKd~ffilr1N9ScPXfvTXqy5yvdhJg1G0xJy1zYtUjGAL8GEx3Av-0vIhpWMGYTM8XKEU5gYA9qt30oVtNph6TkTYSABrsYTaj-hzQL9WCgyapmjvG69ETMh4wj44r2rcbk4T3j0l6l4u76Gh~lyRSll3aK4qycdUwcyL7FECDu~0W1mJIJwKkCrWHhSpHJSshb-0ElwG71pq4eyQ5g2uxHdK6JbRF7loxUpRQQJ1vlk0EHXdw0wTMaQ9tqHy6xcrQd8Ep0Yvx3tUD8MR0vWOcbQKnL6LwPQByc8tkChlpjnig__&Key-Pair-Id=KVTP0A1DKRTAX')\r\n```",
"I'm facing the same problem. Interestingly using `wget` I can download the file. ",
"It's been resolved again ;)",
"> It's been resolved again ;)\r\n\r\nI'm experiencing the same issue when trying to load this dataset, `FileNotFoundError: https://huggingface.co/datasets/allenai/c4/resolve/1ddc917116b730e1859edef32896ec5c16be51d0/realnewslike/c4-train.00000-of-00512.json.gz`"
] |
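A hedged workaround for checking whether a single C4 shard is reachable independently of streaming, using `huggingface_hub` (the shard name is taken from the tracebacks above; this only verifies access, it does not fix the CDN-side 403/502):

```python
from huggingface_hub import hf_hub_download

# Download one shard directly; if the CDN misbehaves this raises an
# explicit HTTP error instead of fsspec's generic FileNotFoundError.
local_path = hf_hub_download(
    repo_id="allenai/c4",
    filename="en/c4-validation.00000-of-00008.json.gz",
    repo_type="dataset",
)
print(local_path)
```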
https://api.github.com/repos/huggingface/datasets/issues/1177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1177/comments | https://api.github.com/repos/huggingface/datasets/issues/1177/events | https://github.com/huggingface/datasets/pull/1177 | 757,778,684 | MDExOlB1bGxSZXF1ZXN0NTMzMDkxMTQ3 | 1,177 | Add Korean NER dataset | [] | closed | false | null | 1 | 2020-12-05T20:56:00Z | 2020-12-06T20:19:48Z | 2020-12-06T20:19:48Z | null | This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1177/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1177/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1177.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1177",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1177.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1177"
} | true | [
"Closed via #1219 "
] |