url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20) | updated_at (stringlengths 20) | closed_at (stringlengths 20 ⌀) | active_lock_reason (null) | body (stringlengths 0-228k ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) | is_pull_request (bool 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5505/comments | https://api.github.com/repos/huggingface/datasets/issues/5505/events | https://github.com/huggingface/datasets/issues/5505 | 1,571,720,814 | I_kwDODunzps5dro5u | 5,505 | PyTorch BatchSampler still loads from Dataset one-by-one | [] | closed | false | null | 2 | 2023-02-06T01:14:55Z | 2023-02-19T18:27:30Z | 2023-02-19T18:27:30Z | null | ### Describe the bug
In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue.
I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indices by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object. Since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58) and reverts to fetching one-by-one.
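Roughly, the dispatch in PyTorch's fetcher looks like this (a paraphrase for illustration, not the exact source):
```py
# paraphrase of the batch-fetch dispatch in torch.utils.data._utils.fetch
def fetch(dataset, batched_index):
    if hasattr(dataset, "__getitems__") and dataset.__getitems__:
        # one call for the whole batch of indices
        return dataset.__getitems__(batched_index)
    # otherwise it falls back to fetching one item at a time
    return [dataset[idx] for idx in batched_index]
```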
### Steps to reproduce the bug
You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:
```py
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler

# `ds` is a Hugging Face Dataset loaded beforehand, e.g. with load_dataset(...)
batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```
### Expected behavior
The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.
To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:
```py
ds.__getitems__ = ds.__getitem__
```
...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.
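For reference, a rough sketch of what exposing `__getitems__` could look like (the wrapper below is illustrative only and is not part of `datasets`; it assumes a PyTorch version whose fetcher dispatches on `__getitems__`):
```py
from torch.utils.data import Dataset as TorchDataset

class BatchReadableDataset(TorchDataset):
    """Hypothetical thin wrapper exposing __getitems__ on top of a HF Dataset."""

    def __init__(self, hf_dataset):
        self.ds = hf_dataset

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, index):
        return self.ds[index]

    def __getitems__(self, indices):
        # A HF Dataset returns a dict of columns when indexed with a list,
        # while PyTorch's default collate expects a list of per-example dicts.
        batch = self.ds[indices]
        return [{k: v[i] for k, v in batch.items()} for i in range(len(indices))]
```
With such a wrapper, a plain `DataLoader(BatchReadableDataset(ds), batch_size=32)` should be able to fetch each batch with a single read of the underlying data.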
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5505/timeline | null | completed | null | null | false | [
"This change seems to come from a few months ago in the PyTorch side. That's good news and it means we may not need to pass a batch_sampler as soon as we add `Dataset.__getitems__` to get the optimal speed :)\r\n\r\nThanks for reporting ! Would you like to open a PR to add `__getitems__` and remove this outdated documentation ?",
"Yeah I figured this was the sort of thing that probably once worked. I can confirm that you no longer need the batch sampler, just `batch_size=n` in the `DataLoader`.\r\n\r\nI'll pass on the PR, I'm flat out right now, sorry."
] |
https://api.github.com/repos/huggingface/datasets/issues/5774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5774/comments | https://api.github.com/repos/huggingface/datasets/issues/5774/events | https://github.com/huggingface/datasets/pull/5774 | 1,676,716,662 | PR_kwDODunzps5OxIMe | 5,774 | Fix style | [] | closed | false | null | 2 | 2023-04-20T13:21:32Z | 2023-04-20T13:34:26Z | 2023-04-20T13:24:28Z | null | Fix C419 issues | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5774/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5774/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5774.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5774",
"merged_at": "2023-04-20T13:24:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5774.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5774"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010336 / 0.011353 (-0.001017) | 0.007085 / 0.011008 (-0.003924) | 0.135577 / 0.038508 (0.097069) | 0.038301 / 0.023109 (0.015192) | 0.427919 / 0.275898 (0.152021) | 0.461451 / 0.323480 (0.137971) | 0.008929 / 0.007986 (0.000944) | 0.005260 / 0.004328 (0.000931) | 0.103481 / 0.004250 (0.099231) | 0.054885 / 0.037052 (0.017833) | 0.434956 / 0.258489 (0.176467) | 0.466915 / 0.293841 (0.173074) | 0.052403 / 0.128546 (-0.076144) | 0.021128 / 0.075646 (-0.054518) | 0.466847 / 0.419271 (0.047576) | 0.085096 / 0.043533 (0.041563) | 0.439935 / 0.255139 (0.184796) | 0.453613 / 0.283200 (0.170413) | 0.123913 / 0.141683 (-0.017769) | 1.930114 / 1.452155 (0.477959) | 2.052083 / 1.492716 (0.559366) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.280612 / 0.018006 (0.262606) | 0.583937 / 0.000490 (0.583447) | 0.004542 / 0.000200 (0.004342) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035901 / 0.037411 (-0.001510) | 0.160357 / 0.014526 (0.145831) | 0.141661 / 0.176557 (-0.034896) | 0.234915 / 0.737135 (-0.502220) | 0.164110 / 0.296338 (-0.132228) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659901 / 0.215209 (0.444692) | 6.529102 / 2.077655 (4.451447) | 2.635324 / 1.504120 (1.131204) | 2.275777 / 1.541195 (0.734583) | 2.343205 / 1.468490 
(0.874715) | 1.241310 / 4.584777 (-3.343467) | 5.683784 / 3.745712 (1.938072) | 3.377162 / 5.269862 (-1.892700) | 2.176404 / 4.565676 (-2.389273) | 0.144303 / 0.424275 (-0.279972) | 0.016352 / 0.007607 (0.008745) | 0.817383 / 0.226044 (0.591339) | 8.148356 / 2.268929 (5.879428) | 3.489277 / 55.444624 (-51.955347) | 2.848086 / 6.876477 (-4.028391) | 2.973304 / 2.142072 (0.831232) | 1.517821 / 4.805227 (-3.287407) | 0.278794 / 6.500664 (-6.221870) | 0.096385 / 0.075469 (0.020916) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631693 / 1.841788 (-0.210095) | 19.564716 / 8.074308 (11.490408) | 23.583081 / 10.191392 (13.391689) | 0.252363 / 0.680424 (-0.428061) | 0.027644 / 0.534201 (-0.506557) | 0.579634 / 0.579283 (0.000351) | 0.645702 / 0.434364 (0.211338) | 0.667302 / 0.540337 (0.126965) | 0.766425 / 1.386936 (-0.620511) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011186 / 0.011353 (-0.000167) | 0.007327 / 0.011008 (-0.003681) | 0.105441 / 0.038508 (0.066933) | 0.040293 / 0.023109 (0.017184) | 0.480557 / 0.275898 (0.204659) | 0.522049 / 0.323480 (0.198569) | 0.007779 / 0.007986 (-0.000207) | 0.007338 / 0.004328 (0.003009) | 0.104744 / 0.004250 (0.100494) | 0.059463 / 0.037052 (0.022411) | 0.494055 / 0.258489 (0.235566) | 0.534340 / 0.293841 (0.240499) | 0.062800 / 0.128546 (-0.065746) | 0.020687 / 0.075646 (-0.054959) | 0.135833 / 0.419271 (-0.283439) | 0.087472 / 0.043533 (0.043939) | 0.465019 / 0.255139 (0.209880) | 0.526713 / 0.283200 (0.243513) | 0.131424 / 0.141683 (-0.010259) | 1.884759 / 1.452155 (0.432605) | 2.015817 / 1.492716 (0.523101) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237032 / 0.018006 (0.219026) | 0.605209 / 0.000490 (0.604719) | 0.006653 / 0.000200 (0.006453) | 0.000264 / 0.000054 (0.000210) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034982 / 0.037411 (-0.002430) | 0.141409 / 0.014526 (0.126883) | 0.151635 / 0.176557 (-0.024922) | 0.217298 / 0.737135 (-0.519837) | 0.171945 / 0.296338 (-0.124393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678596 / 0.215209 (0.463387) | 6.802432 / 2.077655 (4.724777) | 3.021617 / 1.504120 (1.517497) | 2.722508 / 1.541195 (1.181313) | 2.728194 / 1.468490 (1.259704) | 1.245863 / 4.584777 (-3.338914) | 5.762676 / 3.745712 (2.016963) | 5.497855 / 5.269862 (0.227994) | 2.855764 / 4.565676 (-1.709912) | 0.157359 / 0.424275 (-0.266916) | 0.015562 / 0.007607 (0.007955) | 0.865559 / 0.226044 (0.639515) | 8.553052 / 2.268929 (6.284123) | 3.905544 / 55.444624 (-51.539081) | 3.272528 / 6.876477 (-3.603949) | 3.399481 / 2.142072 (1.257408) | 1.540155 / 4.805227 (-3.265072) | 0.275871 / 6.500664 (-6.224793) | 0.092346 / 0.075469 (0.016877) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.753646 / 1.841788 (-0.088142) | 20.074050 / 8.074308 (11.999742) | 23.920391 / 10.191392 (13.728999) | 0.257161 / 0.680424 (-0.423263) | 0.027805 / 0.534201 (-0.506396) | 0.565605 / 0.579283 (-0.013678) | 0.643277 / 0.434364 (0.208914) | 0.633504 / 0.540337 (0.093167) | 0.754317 / 1.386936 (-0.632619) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2097/comments | https://api.github.com/repos/huggingface/datasets/issues/2097/events | https://github.com/huggingface/datasets/pull/2097 | 838,105,289 | MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3 | 2,097 | fixes issue #1110 by descending further if `obj["_type"]` is a dict | [] | closed | false | null | 0 | 2021-03-22T21:00:55Z | 2021-03-22T21:01:11Z | 2021-03-22T21:01:11Z | null | Check metrics | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2097/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2097.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2097",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2097.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2097"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2127/comments | https://api.github.com/repos/huggingface/datasets/issues/2127/events | https://github.com/huggingface/datasets/pull/2127 | 843,017,199 | MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3 | 2,127 | make documentation more clear to use different cloud storage | [] | closed | false | null | 0 | 2021-03-29T06:24:06Z | 2021-03-29T12:16:24Z | 2021-03-29T12:16:24Z | null | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2127/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"merged_at": "2021-03-29T12:16:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2127"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1213/comments | https://api.github.com/repos/huggingface/datasets/issues/1213/events | https://github.com/huggingface/datasets/pull/1213 | 757,983,884 | MDExOlB1bGxSZXF1ZXN0NTMzMjM4NzEz | 1,213 | add taskmaster3 | [] | closed | false | null | 2 | 2020-12-06T17:56:03Z | 2020-12-09T11:05:10Z | 2020-12-09T11:00:29Z | null | Adding Taskmaster-3 dataset
https://github.com/google-research-datasets/Taskmaster/tree/master/TM-3-2020.
The dataset structure is almost the same as in the original dataset, with these two changes:
1. In the original dataset, each `apis` has an `args` field, which is a `dict` with variable keys representing the names and values of the args. Here it is converted to a `list` of `dict` with keys `arg_name` and `arg_value`. For example:
```python
args = {"name.movie": "Mulan", "name.theater": ": "Mountain AMC 16"}
```
becomes
```python
[
{
"arg_name": "name.movie",
"arg_value": "Mulan"
},
{
"arg_name": "name.theater",
"arg_value": "Mountain AMC 16"
}
]
```
2. Each `apis` has a `response`, which is also a `dict` with variable keys representing the response name/type and its value. As above, it is converted to a `list` of `dict` with keys `response_name` and `response_value` (a rough sketch of this conversion is shown below).
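For illustration, a minimal sketch of the conversion applied in both cases (the helper name is hypothetical and not taken from the dataset script):
```python
def to_record_list(mapping, name_key="arg_name", value_key="arg_value"):
    # Turn a variable-key dict into a list of fixed-schema records.
    return [{name_key: name, value_key: value} for name, value in mapping.items()]

to_record_list({"name.movie": "Mulan", "name.theater": "Mountain AMC 16"})
# -> [{'arg_name': 'name.movie', 'arg_value': 'Mulan'},
#     {'arg_name': 'name.theater', 'arg_value': 'Mountain AMC 16'}]
```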
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1213/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1213/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1213.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1213",
"merged_at": "2020-12-09T11:00:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1213.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1213"
} | true | [
"(you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')",
"> (you were unlucky, my rule of thumb for reducing the dummy data is to check whether they're above 50KB and you're at 52KB ^^')\r\n\r\nOops :(\r\n\r\nThanks for the suggestion, will reduce the size"
] |
https://api.github.com/repos/huggingface/datasets/issues/4168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4168/comments | https://api.github.com/repos/huggingface/datasets/issues/4168/events | https://github.com/huggingface/datasets/pull/4168 | 1,203,867,540 | PR_kwDODunzps42NL6F | 4,168 | Add code examples to API docs | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 4 | 2022-04-13T23:03:38Z | 2022-04-27T18:53:37Z | 2022-04-27T18:48:34Z | null | This PR adds code examples for functions related to the base Datasets class to highlight usage. Most of the examples use the `rotten_tomatoes` dataset since it is nice and small. Several things I would appreciate feedback on:
- Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer. Personally, I think we might be able to get away with not including this since users probably want to try the function on their own dataset. For example:
```py
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> code example goes here
```
- Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?
- For the `class_encode_column` function, let me know if there is a simpler dataset with fewer columns (currently using `winograd_wsc`) so it is easier for users to see what changed.
- Where possible, I try to show the input before and the output after using a function like `flatten` for example. Do you think this is too much and just showing the usage (ie, `>>> ds.flatten()`) will be sufficient?
Thanks :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4168/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4168",
"merged_at": "2022-04-27T18:48:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4168"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Do you think it is clearer to make every code example fully reproducible so when users copy the code they can actually run it and get an output? This seems quite repetitive - maybe even unnecessary - but it is definitely clearer.\r\n\r\nI think it's ok to be repetitive to get more clarity. Many users come from `transformers` and may have little experience with some processing methods (especially torch users).\r\n\r\n> Should we showcase a function with more than one parameter to highlight different use-cases (it's pretty basic right now, but I'd be happy to add more)?\r\n\r\nMaybe let's do it case by case, depending on whether there are parameters that are likely to be used often ?\r\n\r\n> For the class_encode_column function, let me know if there is a simpler dataset with fewer columns (currently using winograd_wsc) so it is easier for users to see what changed.\r\n\r\nYou can try with `boolq`, it has a boolean column that can be converted to labels\r\n\r\n> Where possible, I try to show the input before and the output after using a function like flatten for example. Do you think this is too much and just showing the usage (ie, >>> ds.flatten()) will be sufficient?\r\n\r\nNo I don't think it's too much, it's nice this way thanks :)",
"Updated each code example so they are fully reproducible (where applicable)! The next step will be to identify some functions where we can show off some parameters that are useful or commonly used. Some useful parameters can be:\r\n\r\n- use `map(batched=True)` to process batches of examples.\r\n- set a seed in `shuffle`.\r\n- set `shuffle` and `seed` in `train_test_split`.\r\n\r\nLet me know if you think of anything else related to the functions in `arrow_dataset.py`!",
"Cool thanks ! I think you can also do `num_proc` for `map`"
] |
https://api.github.com/repos/huggingface/datasets/issues/5411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5411/comments | https://api.github.com/repos/huggingface/datasets/issues/5411/events | https://github.com/huggingface/datasets/pull/5411 | 1,523,297,786 | PR_kwDODunzps5G23-T | 5,411 | Update docs of S3 filesystem with async aiobotocore | [] | closed | false | null | 2 | 2023-01-06T23:19:17Z | 2023-01-18T11:18:59Z | 2023-01-18T11:12:04Z | null | [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updating the documentation to use `AioSession` with s3fs, both for the download manager and for working with datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5411/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"merged_at": "2023-01-18T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008587 / 0.011353 (-0.002766) | 0.004613 / 0.011008 (-0.006395) | 0.100446 / 0.038508 (0.061938) | 0.029606 / 0.023109 (0.006497) | 0.302102 / 0.275898 (0.026204) | 0.357364 / 0.323480 (0.033884) | 0.007031 / 0.007986 (-0.000954) | 0.003593 / 0.004328 (-0.000735) | 0.078110 / 0.004250 (0.073860) | 0.035495 / 0.037052 (-0.001557) | 0.312522 / 0.258489 (0.054033) | 0.349336 / 0.293841 (0.055495) | 0.033719 / 0.128546 (-0.094827) | 0.011449 / 0.075646 (-0.064197) | 0.321760 / 0.419271 (-0.097512) | 0.043697 / 0.043533 (0.000165) | 0.304476 / 0.255139 (0.049337) | 0.333126 / 0.283200 (0.049926) | 0.092756 / 0.141683 (-0.048927) | 1.506734 / 1.452155 (0.054579) | 1.547381 / 1.492716 (0.054664) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178177 / 0.018006 (0.160171) | 0.427814 / 0.000490 (0.427324) | 0.002505 / 0.000200 (0.002305) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023039 / 0.037411 (-0.014372) | 0.097113 / 0.014526 (0.082587) | 0.105014 / 0.176557 (-0.071543) | 0.141185 / 0.737135 (-0.595950) | 0.108843 / 0.296338 (-0.187495) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.424148 / 0.215209 (0.208939) | 4.247599 / 2.077655 (2.169944) | 2.130720 / 1.504120 (0.626600) | 1.916349 / 1.541195 (0.375154) | 1.831515 / 1.468490 
(0.363025) | 0.688301 / 4.584777 (-3.896476) | 3.381749 / 3.745712 (-0.363963) | 2.900045 / 5.269862 (-2.369817) | 1.576248 / 4.565676 (-2.989428) | 0.082354 / 0.424275 (-0.341921) | 0.012200 / 0.007607 (0.004593) | 0.525753 / 0.226044 (0.299709) | 5.277672 / 2.268929 (3.008743) | 2.603870 / 55.444624 (-52.840754) | 2.296203 / 6.876477 (-4.580273) | 2.308014 / 2.142072 (0.165942) | 0.809056 / 4.805227 (-3.996171) | 0.148122 / 6.500664 (-6.352542) | 0.066097 / 0.075469 (-0.009372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.214059 / 1.841788 (-0.627728) | 13.671332 / 8.074308 (5.597024) | 13.694554 / 10.191392 (3.503162) | 0.151454 / 0.680424 (-0.528970) | 0.028514 / 0.534201 (-0.505687) | 0.391480 / 0.579283 (-0.187804) | 0.404499 / 0.434364 (-0.029865) | 0.458111 / 0.540337 (-0.082226) | 0.539454 / 1.386936 (-0.847482) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006795 / 0.011353 (-0.004558) | 0.004463 / 0.011008 (-0.006545) | 0.099542 / 0.038508 (0.061034) | 0.027588 / 0.023109 (0.004479) | 0.423023 / 0.275898 (0.147125) | 0.458459 / 0.323480 (0.134979) | 0.004981 / 0.007986 (-0.003005) | 0.003321 / 0.004328 (-0.001008) | 0.075727 / 0.004250 (0.071477) | 0.040541 / 0.037052 (0.003489) | 0.423724 / 0.258489 (0.165235) | 0.468334 / 0.293841 (0.174493) | 0.031732 / 0.128546 (-0.096814) | 0.011478 / 0.075646 (-0.064168) | 0.319807 / 0.419271 (-0.099465) | 0.041215 / 0.043533 (-0.002318) | 0.423060 / 0.255139 (0.167921) | 0.446157 / 0.283200 (0.162957) | 0.088884 / 0.141683 (-0.052799) | 1.553404 / 1.452155 (0.101250) | 1.607797 / 1.492716 (0.115080) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208314 / 0.018006 (0.190307) | 0.411627 / 0.000490 (0.411137) | 0.002416 / 0.000200 (0.002216) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024641 / 0.037411 (-0.012770) | 0.101047 / 0.014526 (0.086521) | 0.108410 / 0.176557 (-0.068147) | 0.142860 / 0.737135 (-0.594276) | 0.112486 / 0.296338 (-0.183852) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485520 / 0.215209 (0.270311) | 4.864009 / 2.077655 (2.786355) | 2.541865 / 1.504120 (1.037745) | 2.339569 / 1.541195 (0.798374) | 2.378258 / 1.468490 (0.909768) | 0.698000 / 4.584777 (-3.886777) | 3.343137 / 3.745712 (-0.402575) | 1.842264 / 5.269862 (-3.427597) | 1.154707 / 4.565676 (-3.410969) | 0.082826 / 0.424275 (-0.341449) | 0.012379 / 0.007607 (0.004772) | 0.583335 / 0.226044 (0.357291) | 5.885934 / 2.268929 (3.617006) | 2.997769 / 55.444624 (-52.446856) | 2.653681 / 6.876477 (-4.222796) | 2.761656 / 2.142072 (0.619583) | 0.799883 / 4.805227 (-4.005344) | 0.151398 / 6.500664 (-6.349266) | 0.067445 / 0.075469 (-0.008024) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292009 / 1.841788 (-0.549779) | 13.976180 / 8.074308 (5.901872) | 14.219469 / 10.191392 (4.028077) | 0.127810 / 0.680424 (-0.552614) | 0.016919 / 0.534201 (-0.517282) | 0.376401 / 0.579283 (-0.202882) | 0.388563 / 0.434364 (-0.045801) | 0.444904 / 0.540337 (-0.095433) | 0.532290 / 1.386936 (-0.854646) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/776/comments | https://api.github.com/repos/huggingface/datasets/issues/776/events | https://github.com/huggingface/datasets/pull/776 | 732,343,550 | MDExOlB1bGxSZXF1ZXN0NTEyMjk5NzQx | 776 | Allow custom split names in text dataset | [] | closed | false | null | 1 | 2020-10-29T14:04:06Z | 2020-10-30T13:46:45Z | 2020-10-30T13:23:52Z | null | The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`
Fix #735 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/776/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/776.diff",
"html_url": "https://github.com/huggingface/datasets/pull/776",
"merged_at": "2020-10-30T13:23:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/776.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/776"
} | true | [
"Awesome! This will make the behaviour much more intuitive for some non-standard code.\r\n\r\nThanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5242/comments | https://api.github.com/repos/huggingface/datasets/issues/5242/events | https://github.com/huggingface/datasets/issues/5242 | 1,449,069,382 | I_kwDODunzps5WXwtG | 5,242 | Failed Data Processing upon upload with zip file full of images | [] | open | false | null | 1 | 2022-11-15T02:47:52Z | 2022-11-15T17:59:23Z | null | null | I went to autotrain and under image classification arrived where it was time to prepare my dataset. Screenshot below

I chose the method 2 option. I have a csv file with two columns. ~23,000 files.
I uploaded this and chose the image_relpath and target columns.
The image uploader said that I could only upload 10,000 individual images at a time, so the second option was to zip the images up and upload a zip archive, which I did.
That all uploaded.
Now I have the message below. Doesn't the zip archive just get uncompressed on the Hugging Face end?
What am I missing here?

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5242/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5242/timeline | null | null | null | null | false | [
"cc @abhishekkrthakur @SBrandeis "
] |
https://api.github.com/repos/huggingface/datasets/issues/5100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5100/comments | https://api.github.com/repos/huggingface/datasets/issues/5100/events | https://github.com/huggingface/datasets/issues/5100 | 1,404,458,586 | I_kwDODunzps5TtlZa | 5,100 | datasets[s3] sagemaker can't run a model - datasets issue with Value and ClassLabel and cast() method | [] | closed | false | null | 0 | 2022-10-11T11:16:31Z | 2022-10-11T13:48:26Z | 2022-10-11T13:48:26Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5100/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/599/comments | https://api.github.com/repos/huggingface/datasets/issues/599/events | https://github.com/huggingface/datasets/pull/599 | 697,377,786 | MDExOlB1bGxSZXF1ZXN0NDgzMzI3ODQ5 | 599 | Add MATINF dataset | [] | closed | false | null | 2 | 2020-09-10T03:31:09Z | 2020-09-17T12:17:25Z | 2020-09-17T12:17:25Z | null | @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/599/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/599.diff",
"html_url": "https://github.com/huggingface/datasets/pull/599",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/599.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/599"
} | true | [
"Hi ! sorry for the late response\r\n\r\nCould you try to rebase from master ? We changed the named of the library last week so you have to include this change in your code.\r\n\r\nCan you give me more details about the error you get when running the cli command ?\r\n\r\nNote that in case of a manual download you have to specify the directory where you downloaded the data with `--data_dir <path/to/the/directory>`",
"I fucked up the Git rebase lol. Closing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/4563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4563/comments | https://api.github.com/repos/huggingface/datasets/issues/4563/events | https://github.com/huggingface/datasets/pull/4563 | 1,283,914,383 | PR_kwDODunzps46UmZQ | 4,563 | Support streaming allocine dataset | [] | closed | false | null | 1 | 2022-06-24T15:55:03Z | 2022-06-24T16:54:57Z | 2022-06-24T16:44:41Z | null | Support streaming allocine dataset.
Fix #4562. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4563/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4563",
"merged_at": "2022-06-24T16:44:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4563"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5821/comments | https://api.github.com/repos/huggingface/datasets/issues/5821/events | https://github.com/huggingface/datasets/pull/5821 | 1,696,400,343 | PR_kwDODunzps5PzHLU | 5,821 | IterableDataset Arrow formatting | [] | closed | false | null | 13 | 2023-05-04T17:23:43Z | 2023-05-31T09:43:26Z | 2023-05-31T09:36:18Z | null | Adding an optional `.iter_arrow` method to examples iterables. This allows Arrow formatting to be used in map/filter.
This will also be useful for torch formatting, since we can reuse the TorchFormatter that converts Arrow data to torch tensors.
Related to https://github.com/huggingface/datasets/issues/5793 and https://github.com/huggingface/datasets/issues/3444
Required for https://github.com/huggingface/datasets/pull/5852
### Example:
Speed x10 in map
```python
from datasets import Dataset
import pyarrow.compute as pc
import time
ds = Dataset.from_dict({"a": range(100_000)})
ids = ds.to_iterable_dataset()
ids = ids.map(lambda x: {"a": [a + 10 for a in x["a"]]}, batched=True)
_start = time.time()
print(f"Python ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms")
# Python (100000 items): 695.7ms
ids = ds.to_iterable_dataset().with_format("arrow")
ids = ids.map(lambda t: t.set_column(0, "a", pc.add(t[0], 10)), batched=True)
ids = ids.with_format(None)
_start = time.time()
print(f"Arrow ({sum(1 for _ in ids)} items):\t{(time.time() - _start) * 1000:.1f}ms)")
# Arrow (100000 items): 81.0ms)
```
### Implementation details
I added an optional `iter_arrow` method to examples iterables. If an examples iterable has this method, it can be used to iterate over the examples in batches of Arrow tables. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5821/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5821.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5821",
"merged_at": "2023-05-31T09:36:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5821.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5821"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.005554 / 0.011008 (-0.005454) | 0.097663 / 0.038508 (0.059155) | 0.034915 / 0.023109 (0.011806) | 0.303116 / 0.275898 (0.027218) | 0.342376 / 0.323480 (0.018897) | 0.006044 / 0.007986 (-0.001942) | 0.004239 / 0.004328 (-0.000090) | 0.074561 / 0.004250 (0.070310) | 0.049109 / 0.037052 (0.012057) | 0.311302 / 0.258489 (0.052813) | 0.360717 / 0.293841 (0.066876) | 0.035119 / 0.128546 (-0.093428) | 0.012465 / 0.075646 (-0.063181) | 0.333648 / 0.419271 (-0.085624) | 0.051294 / 0.043533 (0.007762) | 0.297298 / 0.255139 (0.042159) | 0.321957 / 0.283200 (0.038757) | 0.108206 / 0.141683 (-0.033477) | 1.425023 / 1.452155 (-0.027132) | 1.526395 / 1.492716 (0.033678) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300694 / 0.018006 (0.282688) | 0.515141 / 0.000490 (0.514651) | 0.003965 / 0.000200 (0.003765) | 0.000260 / 0.000054 (0.000206) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029428 / 0.037411 (-0.007983) | 0.107634 / 0.014526 (0.093108) | 0.123662 / 0.176557 (-0.052895) | 0.182886 / 0.737135 (-0.554249) | 0.128361 / 0.296338 (-0.167977) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.398809 / 0.215209 (0.183600) | 3.984428 / 2.077655 (1.906773) | 1.795337 / 1.504120 (0.291217) | 1.609235 / 1.541195 (0.068040) | 1.724825 / 1.468490 
(0.256335) | 0.698413 / 4.584777 (-3.886364) | 3.857479 / 3.745712 (0.111767) | 2.135203 / 5.269862 (-3.134659) | 1.348458 / 4.565676 (-3.217218) | 0.086445 / 0.424275 (-0.337830) | 0.012717 / 0.007607 (0.005110) | 0.498713 / 0.226044 (0.272668) | 4.988685 / 2.268929 (2.719757) | 2.284764 / 55.444624 (-53.159860) | 1.961162 / 6.876477 (-4.915315) | 2.147514 / 2.142072 (0.005441) | 0.850334 / 4.805227 (-3.954894) | 0.171664 / 6.500664 (-6.329000) | 0.065526 / 0.075469 (-0.009943) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204398 / 1.841788 (-0.637390) | 15.625790 / 8.074308 (7.551482) | 14.614980 / 10.191392 (4.423588) | 0.167135 / 0.680424 (-0.513289) | 0.017631 / 0.534201 (-0.516570) | 0.427337 / 0.579283 (-0.151946) | 0.439203 / 0.434364 (0.004839) | 0.499670 / 0.540337 (-0.040668) | 0.587577 / 1.386936 (-0.799359) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007866 / 0.011353 (-0.003486) | 0.005798 / 0.011008 (-0.005210) | 0.075803 / 0.038508 (0.037295) | 0.035773 / 0.023109 (0.012664) | 0.361965 / 0.275898 (0.086067) | 0.402780 / 0.323480 (0.079300) | 0.006521 / 0.007986 (-0.001465) | 0.004613 / 0.004328 (0.000284) | 0.075196 / 0.004250 (0.070946) | 0.055324 / 0.037052 (0.018272) | 0.363468 / 0.258489 (0.104979) | 0.410344 / 0.293841 (0.116503) | 0.036324 / 0.128546 (-0.092222) | 0.012891 / 0.075646 (-0.062755) | 0.086991 / 0.419271 (-0.332280) | 0.048082 / 0.043533 (0.004549) | 0.357238 / 0.255139 (0.102099) | 0.377065 / 0.283200 (0.093865) | 0.118586 / 0.141683 (-0.023097) | 1.463161 / 1.452155 (0.011007) | 1.582686 / 1.492716 (0.089969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.267916 / 0.018006 (0.249909) | 0.540862 / 0.000490 (0.540373) | 0.003148 / 0.000200 (0.002948) | 0.000101 / 0.000054 (0.000047) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032290 / 0.037411 (-0.005122) | 0.115468 / 0.014526 (0.100943) | 0.125743 / 0.176557 (-0.050814) | 0.177469 / 0.737135 (-0.559667) | 0.133579 / 0.296338 (-0.162759) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446727 / 0.215209 (0.231518) | 4.467938 / 2.077655 (2.390284) | 2.330171 / 1.504120 (0.826052) | 2.165624 / 1.541195 (0.624429) | 2.298063 / 1.468490 (0.829573) | 0.702241 / 4.584777 (-3.882536) | 3.845302 / 3.745712 (0.099590) | 2.169278 / 5.269862 (-3.100584) | 1.401392 / 4.565676 (-3.164285) | 0.086672 / 0.424275 (-0.337603) | 0.012355 / 0.007607 (0.004748) | 0.543639 / 0.226044 (0.317595) | 5.425876 / 2.268929 (3.156947) | 2.781794 / 55.444624 (-52.662831) | 2.503724 / 6.876477 (-4.372752) | 2.622580 / 2.142072 (0.480507) | 0.847143 / 4.805227 (-3.958084) | 0.171721 / 6.500664 (-6.328943) | 0.067894 / 0.075469 (-0.007575) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292194 / 1.841788 (-0.549594) | 15.497311 / 8.074308 (7.423003) | 15.002463 / 10.191392 (4.811071) | 0.152244 / 0.680424 (-0.528180) | 0.018085 / 0.534201 (-0.516116) | 0.445787 / 0.579283 (-0.133496) | 0.448960 / 0.434364 (0.014596) | 0.515319 / 0.540337 (-0.025019) | 0.623840 / 1.386936 (-0.763096) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006938 / 0.011353 (-0.004415) | 0.005100 / 0.011008 (-0.005909) | 0.096525 / 0.038508 (0.058017) | 0.033764 / 0.023109 (0.010655) | 0.301107 / 0.275898 (0.025209) | 0.333140 / 0.323480 (0.009660) | 0.005719 / 0.007986 (-0.002266) | 0.005192 / 0.004328 (0.000864) | 0.073685 / 0.004250 (0.069434) | 0.048149 / 0.037052 (0.011096) | 0.299244 / 0.258489 (0.040754) | 0.347518 / 0.293841 (0.053677) | 0.034810 / 0.128546 (-0.093736) | 0.012284 / 0.075646 (-0.063363) | 0.333600 / 0.419271 (-0.085672) | 0.050750 / 0.043533 (0.007217) | 0.299782 / 0.255139 (0.044643) | 0.322712 / 0.283200 (0.039512) | 0.105659 / 0.141683 (-0.036024) | 1.457536 / 1.452155 (0.005381) | 1.571604 / 1.492716 (0.078887) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207190 / 0.018006 (0.189184) | 0.439230 / 0.000490 (0.438740) | 0.006403 / 0.000200 (0.006203) | 0.000282 / 0.000054 (0.000228) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027424 / 0.037411 (-0.009987) | 0.107180 / 0.014526 (0.092655) | 0.118356 / 0.176557 (-0.058201) | 0.175557 / 0.737135 (-0.561579) | 0.125671 / 0.296338 (-0.170668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411249 / 0.215209 (0.196039) | 4.094494 / 2.077655 (2.016839) | 1.946843 / 1.504120 (0.442723) | 1.766503 / 1.541195 (0.225308) | 1.831406 / 1.468490 
(0.362916) | 0.704637 / 4.584777 (-3.880140) | 3.819204 / 3.745712 (0.073492) | 3.412598 / 5.269862 (-1.857263) | 1.796385 / 4.565676 (-2.769291) | 0.084591 / 0.424275 (-0.339684) | 0.012568 / 0.007607 (0.004961) | 0.506372 / 0.226044 (0.280327) | 5.049461 / 2.268929 (2.780532) | 2.409860 / 55.444624 (-53.034765) | 2.064514 / 6.876477 (-4.811963) | 2.192808 / 2.142072 (0.050735) | 0.833773 / 4.805227 (-3.971455) | 0.167948 / 6.500664 (-6.332716) | 0.064617 / 0.075469 (-0.010852) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.174739 / 1.841788 (-0.667048) | 14.605634 / 8.074308 (6.531326) | 14.321043 / 10.191392 (4.129651) | 0.145892 / 0.680424 (-0.534532) | 0.017413 / 0.534201 (-0.516788) | 0.444940 / 0.579283 (-0.134343) | 0.430792 / 0.434364 (-0.003572) | 0.539699 / 0.540337 (-0.000638) | 0.640279 / 1.386936 (-0.746657) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007159 / 0.011353 (-0.004194) | 0.005313 / 0.011008 (-0.005695) | 0.073630 / 0.038508 (0.035122) | 0.033459 / 0.023109 (0.010350) | 0.356959 / 0.275898 (0.081061) | 0.385918 / 0.323480 (0.062438) | 0.005714 / 0.007986 (-0.002272) | 0.004074 / 0.004328 (-0.000254) | 0.073278 / 0.004250 (0.069028) | 0.047193 / 0.037052 (0.010140) | 0.360300 / 0.258489 (0.101811) | 0.398052 / 0.293841 (0.104212) | 0.035670 / 0.128546 (-0.092876) | 0.012499 / 0.075646 (-0.063147) | 0.086677 / 0.419271 (-0.332595) | 0.046534 / 0.043533 (0.003001) | 0.370029 / 0.255139 (0.114890) | 0.376040 / 0.283200 (0.092841) | 0.105184 / 0.141683 (-0.036499) | 1.419779 / 1.452155 (-0.032375) | 1.538925 / 1.492716 (0.046209) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220465 / 0.018006 (0.202459) | 0.438836 / 0.000490 (0.438346) | 0.000428 / 0.000200 (0.000228) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029114 / 0.037411 (-0.008298) | 0.111871 / 0.014526 (0.097345) | 0.124367 / 0.176557 (-0.052189) | 0.173737 / 0.737135 (-0.563398) | 0.128435 / 0.296338 (-0.167904) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440706 / 0.215209 (0.225497) | 4.414826 / 2.077655 (2.337171) | 2.128899 / 1.504120 (0.624780) | 1.929551 / 1.541195 (0.388357) | 2.013130 / 1.468490 (0.544640) | 0.708566 / 4.584777 (-3.876211) | 3.846459 / 3.745712 (0.100747) | 2.158829 / 5.269862 (-3.111032) | 1.339454 / 4.565676 (-3.226223) | 0.086345 / 0.424275 (-0.337930) | 0.012085 / 0.007607 (0.004478) | 0.546360 / 0.226044 (0.320316) | 5.461612 / 2.268929 (3.192683) | 2.657388 / 55.444624 (-52.787237) | 2.298403 / 6.876477 (-4.578074) | 2.344572 / 2.142072 (0.202499) | 0.844276 / 4.805227 (-3.960951) | 0.170225 / 6.500664 (-6.330439) | 0.064684 / 0.075469 (-0.010785) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265114 / 1.841788 (-0.576674) | 15.058156 / 8.074308 (6.983848) | 14.485182 / 10.191392 (4.293790) | 0.165960 / 0.680424 (-0.514464) | 0.017481 / 0.534201 (-0.516719) | 0.425141 / 0.579283 (-0.154142) | 0.434883 / 0.434364 (0.000519) | 0.506701 / 0.540337 (-0.033637) | 0.613240 / 1.386936 (-0.773697) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007651 / 0.011353 (-0.003702) | 0.005503 / 0.011008 (-0.005505) | 0.098751 / 0.038508 (0.060243) | 0.036822 / 0.023109 (0.013713) | 0.340754 / 0.275898 (0.064856) | 0.387247 / 0.323480 (0.063767) | 0.006513 / 0.007986 (-0.001473) | 0.006135 / 0.004328 (0.001807) | 0.073656 / 0.004250 (0.069406) | 0.055508 / 0.037052 (0.018456) | 0.352493 / 0.258489 (0.094004) | 0.408003 / 0.293841 (0.114162) | 0.036346 / 0.128546 (-0.092201) | 0.012562 / 0.075646 (-0.063085) | 0.335111 / 0.419271 (-0.084160) | 0.051928 / 0.043533 (0.008395) | 0.339405 / 0.255139 (0.084266) | 0.366840 / 0.283200 (0.083640) | 0.114353 / 0.141683 (-0.027330) | 1.449062 / 1.452155 (-0.003092) | 1.567310 / 1.492716 (0.074594) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262975 / 0.018006 (0.244968) | 0.570302 / 0.000490 (0.569813) | 0.003419 / 0.000200 (0.003219) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027363 / 0.037411 (-0.010049) | 0.109033 / 0.014526 (0.094507) | 0.119048 / 0.176557 (-0.057509) | 0.175891 / 0.737135 (-0.561244) | 0.124577 / 0.296338 (-0.171762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.397988 / 0.215209 (0.182779) | 3.993210 / 2.077655 (1.915555) | 1.809275 / 1.504120 (0.305155) | 1.614664 / 1.541195 (0.073469) | 1.723650 / 1.468490 
(0.255159) | 0.698484 / 4.584777 (-3.886293) | 3.914135 / 3.745712 (0.168423) | 2.142622 / 5.269862 (-3.127239) | 1.360215 / 4.565676 (-3.205461) | 0.086340 / 0.424275 (-0.337935) | 0.012836 / 0.007607 (0.005229) | 0.500728 / 0.226044 (0.274684) | 5.006744 / 2.268929 (2.737815) | 2.350668 / 55.444624 (-53.093956) | 1.979816 / 6.876477 (-4.896660) | 2.190159 / 2.142072 (0.048087) | 0.854063 / 4.805227 (-3.951164) | 0.170203 / 6.500664 (-6.330461) | 0.066903 / 0.075469 (-0.008566) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.184012 / 1.841788 (-0.657775) | 15.407350 / 8.074308 (7.333042) | 14.758180 / 10.191392 (4.566788) | 0.169280 / 0.680424 (-0.511144) | 0.017419 / 0.534201 (-0.516781) | 0.434359 / 0.579283 (-0.144925) | 0.442515 / 0.434364 (0.008151) | 0.503132 / 0.540337 (-0.037205) | 0.602589 / 1.386936 (-0.784347) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008022 / 0.011353 (-0.003331) | 0.005473 / 0.011008 (-0.005535) | 0.076106 / 0.038508 (0.037598) | 0.037065 / 0.023109 (0.013956) | 0.380039 / 0.275898 (0.104141) | 0.394205 / 0.323480 (0.070725) | 0.006447 / 0.007986 (-0.001539) | 0.006011 / 0.004328 (0.001682) | 0.075236 / 0.004250 (0.070985) | 0.054425 / 0.037052 (0.017372) | 0.381707 / 0.258489 (0.123218) | 0.411237 / 0.293841 (0.117396) | 0.037222 / 0.128546 (-0.091324) | 0.012627 / 0.075646 (-0.063020) | 0.086733 / 0.419271 (-0.332538) | 0.053857 / 0.043533 (0.010324) | 0.373374 / 0.255139 (0.118235) | 0.381680 / 0.283200 (0.098480) | 0.121962 / 0.141683 (-0.019721) | 1.430804 / 1.452155 (-0.021351) | 1.562517 / 1.492716 (0.069801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262034 / 0.018006 (0.244028) | 0.563497 / 0.000490 (0.563007) | 0.002726 / 0.000200 (0.002526) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031071 / 0.037411 (-0.006341) | 0.111983 / 0.014526 (0.097457) | 0.126634 / 0.176557 (-0.049923) | 0.177511 / 0.737135 (-0.559625) | 0.132599 / 0.296338 (-0.163739) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436148 / 0.215209 (0.220939) | 4.344850 / 2.077655 (2.267195) | 2.105877 / 1.504120 (0.601757) | 1.920934 / 1.541195 (0.379739) | 2.072930 / 1.468490 (0.604440) | 0.701793 / 4.584777 (-3.882984) | 3.841621 / 3.745712 (0.095909) | 3.602550 / 5.269862 (-1.667311) | 1.775999 / 4.565676 (-2.789677) | 0.086024 / 0.424275 (-0.338251) | 0.012275 / 0.007607 (0.004668) | 0.532815 / 0.226044 (0.306770) | 5.336273 / 2.268929 (3.067344) | 2.638842 / 55.444624 (-52.805782) | 2.301842 / 6.876477 (-4.574635) | 2.407448 / 2.142072 (0.265376) | 0.855836 / 4.805227 (-3.949392) | 0.170348 / 6.500664 (-6.330317) | 0.066926 / 0.075469 (-0.008543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291515 / 1.841788 (-0.550272) | 15.869825 / 8.074308 (7.795517) | 15.068227 / 10.191392 (4.876835) | 0.156953 / 0.680424 (-0.523471) | 0.017761 / 0.534201 (-0.516440) | 0.429515 / 0.579283 (-0.149768) | 0.432758 / 0.434364 (-0.001605) | 0.500080 / 0.540337 (-0.040258) | 0.601451 / 1.386936 (-0.785485) |\n\n</details>\n</details>\n\n\n",
"Will need to take https://github.com/huggingface/datasets/pull/5810 into account if it gets merged before this one",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006914 / 0.011353 (-0.004439) | 0.004727 / 0.011008 (-0.006281) | 0.098880 / 0.038508 (0.060372) | 0.036663 / 0.023109 (0.013554) | 0.317575 / 0.275898 (0.041677) | 0.360301 / 0.323480 (0.036821) | 0.006084 / 0.007986 (-0.001901) | 0.004118 / 0.004328 (-0.000210) | 0.074330 / 0.004250 (0.070079) | 0.042422 / 0.037052 (0.005369) | 0.335625 / 0.258489 (0.077136) | 0.366616 / 0.293841 (0.072775) | 0.028523 / 0.128546 (-0.100023) | 0.008883 / 0.075646 (-0.066763) | 0.332475 / 0.419271 (-0.086797) | 0.051746 / 0.043533 (0.008214) | 0.324952 / 0.255139 (0.069813) | 0.339660 / 0.283200 (0.056460) | 0.103714 / 0.141683 (-0.037969) | 1.472130 / 1.452155 (0.019976) | 1.516548 / 1.492716 (0.023831) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229538 / 0.018006 (0.211532) | 0.449077 / 0.000490 (0.448588) | 0.003707 / 0.000200 (0.003507) | 0.000086 / 0.000054 (0.000032) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027897 / 0.037411 (-0.009514) | 0.115452 / 0.014526 (0.100926) | 0.118830 / 0.176557 (-0.057726) | 0.176228 / 0.737135 (-0.560907) | 0.125966 / 0.296338 (-0.170372) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436947 / 0.215209 (0.221738) | 4.355687 / 2.077655 (2.278033) | 2.195857 / 1.504120 (0.691737) | 2.028133 / 1.541195 (0.486938) | 2.119872 / 1.468490 
(0.651382) | 0.524256 / 4.584777 (-4.060521) | 3.864064 / 3.745712 (0.118352) | 3.446181 / 5.269862 (-1.823680) | 1.610307 / 4.565676 (-2.955370) | 0.065981 / 0.424275 (-0.358294) | 0.012172 / 0.007607 (0.004565) | 0.545341 / 0.226044 (0.319297) | 5.451728 / 2.268929 (3.182800) | 2.690734 / 55.444624 (-52.753890) | 2.368203 / 6.876477 (-4.508274) | 2.549533 / 2.142072 (0.407460) | 0.651296 / 4.805227 (-4.153931) | 0.143697 / 6.500664 (-6.356968) | 0.065170 / 0.075469 (-0.010299) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.198898 / 1.841788 (-0.642890) | 15.349348 / 8.074308 (7.275040) | 15.314467 / 10.191392 (5.123075) | 0.177219 / 0.680424 (-0.503205) | 0.018223 / 0.534201 (-0.515978) | 0.396209 / 0.579283 (-0.183074) | 0.427810 / 0.434364 (-0.006554) | 0.475107 / 0.540337 (-0.065230) | 0.561224 / 1.386936 (-0.825712) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007024 / 0.011353 (-0.004329) | 0.004851 / 0.011008 (-0.006157) | 0.075031 / 0.038508 (0.036523) | 0.036411 / 0.023109 (0.013302) | 0.375999 / 0.275898 (0.100101) | 0.433033 / 0.323480 (0.109553) | 0.006089 / 0.007986 (-0.001897) | 0.005638 / 0.004328 (0.001309) | 0.072599 / 0.004250 (0.068348) | 0.048489 / 0.037052 (0.011436) | 0.381807 / 0.258489 (0.123318) | 0.441531 / 0.293841 (0.147691) | 0.029044 / 0.128546 (-0.099503) | 0.009052 / 0.075646 (-0.066595) | 0.080086 / 0.419271 (-0.339186) | 0.046919 / 0.043533 (0.003386) | 0.360399 / 0.255139 (0.105260) | 0.405445 / 0.283200 (0.122245) | 0.108815 / 0.141683 (-0.032868) | 1.415168 / 1.452155 (-0.036987) | 1.511756 / 1.492716 (0.019040) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210287 / 0.018006 (0.192281) | 0.445139 / 0.000490 (0.444650) | 0.000386 / 0.000200 (0.000186) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030457 / 0.037411 (-0.006954) | 0.117225 / 0.014526 (0.102699) | 0.122833 / 0.176557 (-0.053724) | 0.170441 / 0.737135 (-0.566694) | 0.131589 / 0.296338 (-0.164750) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446541 / 0.215209 (0.231332) | 4.471214 / 2.077655 (2.393560) | 2.145894 / 1.504120 (0.641774) | 1.958113 / 1.541195 (0.416919) | 2.069623 / 1.468490 (0.601132) | 0.527562 / 4.584777 (-4.057215) | 3.838285 / 3.745712 (0.092573) | 1.884780 / 5.269862 (-3.385081) | 1.088124 / 4.565676 (-3.477553) | 0.066099 / 0.424275 (-0.358176) | 0.011973 / 0.007607 (0.004366) | 0.540369 / 0.226044 (0.314325) | 5.403554 / 2.268929 (3.134626) | 2.749920 / 55.444624 (-52.694704) | 2.543169 / 6.876477 (-4.333308) | 2.403116 / 2.142072 (0.261043) | 0.638723 / 4.805227 (-4.166505) | 0.142232 / 6.500664 (-6.358432) | 0.065551 / 0.075469 (-0.009918) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.298307 / 1.841788 (-0.543481) | 15.986177 / 8.074308 (7.911869) | 15.530453 / 10.191392 (5.339061) | 0.160138 / 0.680424 (-0.520286) | 0.017988 / 0.534201 (-0.516213) | 0.397857 / 0.579283 (-0.181427) | 0.435071 / 0.434364 (0.000707) | 0.480096 / 0.540337 (-0.060241) | 0.589139 / 1.386936 (-0.797797) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006976 / 0.011353 (-0.004377) | 0.005068 / 0.011008 (-0.005940) | 0.098178 / 0.038508 (0.059670) | 0.035167 / 0.023109 (0.012057) | 0.324093 / 0.275898 (0.048195) | 0.350749 / 0.323480 (0.027269) | 0.006128 / 0.007986 (-0.001858) | 0.004361 / 0.004328 (0.000033) | 0.075412 / 0.004250 (0.071161) | 0.052083 / 0.037052 (0.015031) | 0.326726 / 0.258489 (0.068237) | 0.371450 / 0.293841 (0.077609) | 0.028522 / 0.128546 (-0.100025) | 0.009210 / 0.075646 (-0.066436) | 0.329296 / 0.419271 (-0.089976) | 0.051182 / 0.043533 (0.007649) | 0.319863 / 0.255139 (0.064724) | 0.329140 / 0.283200 (0.045941) | 0.111653 / 0.141683 (-0.030030) | 1.464205 / 1.452155 (0.012050) | 1.555779 / 1.492716 (0.063062) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.282372 / 0.018006 (0.264366) | 0.569227 / 0.000490 (0.568737) | 0.005289 / 0.000200 (0.005089) | 0.000095 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029875 / 0.037411 (-0.007537) | 0.111889 / 0.014526 (0.097364) | 0.125678 / 0.176557 (-0.050878) | 0.184695 / 0.737135 (-0.552441) | 0.129737 / 0.296338 (-0.166602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417404 / 0.215209 (0.202195) | 4.172367 / 2.077655 (2.094712) | 2.008088 / 1.504120 (0.503968) | 1.813182 / 1.541195 (0.271988) | 1.882727 / 1.468490 
(0.414237) | 0.525764 / 4.584777 (-4.059013) | 3.815202 / 3.745712 (0.069490) | 1.884197 / 5.269862 (-3.385664) | 1.073779 / 4.565676 (-3.491897) | 0.066125 / 0.424275 (-0.358150) | 0.012473 / 0.007607 (0.004866) | 0.522197 / 0.226044 (0.296153) | 5.218486 / 2.268929 (2.949557) | 2.413846 / 55.444624 (-53.030779) | 2.093298 / 6.876477 (-4.783179) | 2.320583 / 2.142072 (0.178511) | 0.648832 / 4.805227 (-4.156395) | 0.146168 / 6.500664 (-6.354496) | 0.065869 / 0.075469 (-0.009600) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.181859 / 1.841788 (-0.659929) | 15.369517 / 8.074308 (7.295209) | 14.896270 / 10.191392 (4.704878) | 0.146793 / 0.680424 (-0.533630) | 0.017960 / 0.534201 (-0.516241) | 0.421801 / 0.579283 (-0.157482) | 0.438357 / 0.434364 (0.003993) | 0.524554 / 0.540337 (-0.015783) | 0.621041 / 1.386936 (-0.765895) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007104 / 0.011353 (-0.004249) | 0.004895 / 0.011008 (-0.006113) | 0.075641 / 0.038508 (0.037133) | 0.034821 / 0.023109 (0.011712) | 0.363875 / 0.275898 (0.087977) | 0.403042 / 0.323480 (0.079562) | 0.006747 / 0.007986 (-0.001238) | 0.005793 / 0.004328 (0.001465) | 0.074709 / 0.004250 (0.070458) | 0.058801 / 0.037052 (0.021749) | 0.366900 / 0.258489 (0.108411) | 0.414442 / 0.293841 (0.120601) | 0.029099 / 0.128546 (-0.099448) | 0.009394 / 0.075646 (-0.066253) | 0.082612 / 0.419271 (-0.336659) | 0.049076 / 0.043533 (0.005543) | 0.358828 / 0.255139 (0.103689) | 0.378261 / 0.283200 (0.095061) | 0.122147 / 0.141683 (-0.019535) | 1.454155 / 1.452155 (0.002000) | 1.572437 / 1.492716 (0.079720) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293133 / 0.018006 (0.275127) | 0.536785 / 0.000490 (0.536295) | 0.000457 / 0.000200 (0.000257) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031046 / 0.037411 (-0.006366) | 0.113929 / 0.014526 (0.099403) | 0.126222 / 0.176557 (-0.050335) | 0.173992 / 0.737135 (-0.563143) | 0.129635 / 0.296338 (-0.166704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441984 / 0.215209 (0.226775) | 4.406002 / 2.077655 (2.328348) | 2.173912 / 1.504120 (0.669792) | 2.000507 / 1.541195 (0.459312) | 2.172766 / 1.468490 (0.704276) | 0.524530 / 4.584777 (-4.060247) | 3.758827 / 3.745712 (0.013115) | 1.886701 / 5.269862 (-3.383160) | 1.073601 / 4.565676 (-3.492075) | 0.066137 / 0.424275 (-0.358139) | 0.011926 / 0.007607 (0.004319) | 0.541103 / 0.226044 (0.315059) | 5.404162 / 2.268929 (3.135233) | 2.634271 / 55.444624 (-52.810354) | 2.366156 / 6.876477 (-4.510321) | 2.566877 / 2.142072 (0.424804) | 0.639088 / 4.805227 (-4.166139) | 0.141810 / 6.500664 (-6.358854) | 0.065446 / 0.075469 (-0.010023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.288173 / 1.841788 (-0.553614) | 15.897051 / 8.074308 (7.822743) | 15.243404 / 10.191392 (5.052012) | 0.162380 / 0.680424 (-0.518043) | 0.017716 / 0.534201 (-0.516485) | 0.396400 / 0.579283 (-0.182883) | 0.420479 / 0.434364 (-0.013885) | 0.476238 / 0.540337 (-0.064099) | 0.583039 / 1.386936 (-0.803897) |\n\n</details>\n</details>\n\n\n",
"I fixed the docstring and type hint",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006310 / 0.011353 (-0.005043) | 0.004297 / 0.011008 (-0.006711) | 0.098288 / 0.038508 (0.059780) | 0.029295 / 0.023109 (0.006185) | 0.386804 / 0.275898 (0.110906) | 0.425717 / 0.323480 (0.102237) | 0.005516 / 0.007986 (-0.002470) | 0.005058 / 0.004328 (0.000730) | 0.074318 / 0.004250 (0.070068) | 0.040609 / 0.037052 (0.003557) | 0.388159 / 0.258489 (0.129670) | 0.428683 / 0.293841 (0.134842) | 0.026207 / 0.128546 (-0.102340) | 0.008655 / 0.075646 (-0.066991) | 0.321601 / 0.419271 (-0.097671) | 0.055329 / 0.043533 (0.011796) | 0.390452 / 0.255139 (0.135313) | 0.409084 / 0.283200 (0.125884) | 0.099555 / 0.141683 (-0.042128) | 1.484289 / 1.452155 (0.032134) | 1.549892 / 1.492716 (0.057176) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219466 / 0.018006 (0.201460) | 0.437288 / 0.000490 (0.436798) | 0.003556 / 0.000200 (0.003356) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023876 / 0.037411 (-0.013535) | 0.100205 / 0.014526 (0.085679) | 0.106365 / 0.176557 (-0.070191) | 0.164353 / 0.737135 (-0.572782) | 0.109987 / 0.296338 (-0.186352) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418819 / 0.215209 (0.203610) | 4.168558 / 2.077655 (2.090903) | 1.862883 / 1.504120 (0.358764) | 1.673308 / 1.541195 (0.132114) | 1.742338 / 1.468490 
(0.273848) | 0.550113 / 4.584777 (-4.034664) | 3.492085 / 3.745712 (-0.253627) | 1.734579 / 5.269862 (-3.535283) | 1.006876 / 4.565676 (-3.558801) | 0.068014 / 0.424275 (-0.356261) | 0.012242 / 0.007607 (0.004634) | 0.520633 / 0.226044 (0.294588) | 5.214095 / 2.268929 (2.945167) | 2.319282 / 55.444624 (-53.125343) | 1.979521 / 6.876477 (-4.896956) | 2.099595 / 2.142072 (-0.042477) | 0.659306 / 4.805227 (-4.145921) | 0.135282 / 6.500664 (-6.365382) | 0.067417 / 0.075469 (-0.008052) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.232099 / 1.841788 (-0.609689) | 13.967219 / 8.074308 (5.892910) | 14.347105 / 10.191392 (4.155713) | 0.146360 / 0.680424 (-0.534063) | 0.017021 / 0.534201 (-0.517180) | 0.363254 / 0.579283 (-0.216030) | 0.404391 / 0.434364 (-0.029973) | 0.428670 / 0.540337 (-0.111668) | 0.514942 / 1.386936 (-0.871994) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006360 / 0.011353 (-0.004993) | 0.004160 / 0.011008 (-0.006848) | 0.074856 / 0.038508 (0.036347) | 0.028624 / 0.023109 (0.005515) | 0.355624 / 0.275898 (0.079726) | 0.403678 / 0.323480 (0.080198) | 0.005253 / 0.007986 (-0.002732) | 0.004808 / 0.004328 (0.000480) | 0.074215 / 0.004250 (0.069964) | 0.040641 / 0.037052 (0.003588) | 0.358473 / 0.258489 (0.099984) | 0.414442 / 0.293841 (0.120601) | 0.025595 / 0.128546 (-0.102951) | 0.008506 / 0.075646 (-0.067140) | 0.081547 / 0.419271 (-0.337725) | 0.039719 / 0.043533 (-0.003814) | 0.355420 / 0.255139 (0.100281) | 0.380953 / 0.283200 (0.097753) | 0.100064 / 0.141683 (-0.041618) | 1.459639 / 1.452155 (0.007484) | 1.557288 / 1.492716 (0.064572) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232837 / 0.018006 (0.214831) | 0.424788 / 0.000490 (0.424298) | 0.000397 / 0.000200 (0.000197) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026156 / 0.037411 (-0.011256) | 0.103633 / 0.014526 (0.089107) | 0.109633 / 0.176557 (-0.066923) | 0.159407 / 0.737135 (-0.577728) | 0.113874 / 0.296338 (-0.182465) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.471980 / 0.215209 (0.256771) | 4.724424 / 2.077655 (2.646769) | 2.459950 / 1.504120 (0.955830) | 2.280926 / 1.541195 (0.739731) | 2.368478 / 1.468490 (0.899987) | 0.552809 / 4.584777 (-4.031968) | 3.461985 / 3.745712 (-0.283728) | 1.757060 / 5.269862 (-3.512802) | 1.009599 / 4.565676 (-3.556077) | 0.068407 / 0.424275 (-0.355868) | 0.012341 / 0.007607 (0.004734) | 0.576287 / 0.226044 (0.350242) | 5.767331 / 2.268929 (3.498402) | 2.965743 / 55.444624 (-52.478882) | 2.644935 / 6.876477 (-4.231542) | 2.699663 / 2.142072 (0.557591) | 0.656005 / 4.805227 (-4.149222) | 0.136315 / 6.500664 (-6.364349) | 0.068355 / 0.075469 (-0.007114) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308301 / 1.841788 (-0.533486) | 14.587268 / 8.074308 (6.512960) | 14.385670 / 10.191392 (4.194278) | 0.148154 / 0.680424 (-0.532270) | 0.016798 / 0.534201 (-0.517402) | 0.360761 / 0.579283 (-0.218523) | 0.392566 / 0.434364 (-0.041798) | 0.431604 / 0.540337 (-0.108734) | 0.528463 / 1.386936 (-0.858473) |\n\n</details>\n</details>\n\n\n",
"let me know if it sounds good for you now @albertvillanova :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008414 / 0.011353 (-0.002939) | 0.005320 / 0.011008 (-0.005688) | 0.115585 / 0.038508 (0.077077) | 0.040815 / 0.023109 (0.017706) | 0.363453 / 0.275898 (0.087555) | 0.385954 / 0.323480 (0.062474) | 0.006463 / 0.007986 (-0.001523) | 0.005571 / 0.004328 (0.001242) | 0.084831 / 0.004250 (0.080581) | 0.050294 / 0.037052 (0.013242) | 0.375684 / 0.258489 (0.117195) | 0.394672 / 0.293841 (0.100831) | 0.033618 / 0.128546 (-0.094928) | 0.010451 / 0.075646 (-0.065195) | 0.388937 / 0.419271 (-0.030334) | 0.059974 / 0.043533 (0.016441) | 0.360437 / 0.255139 (0.105298) | 0.375149 / 0.283200 (0.091950) | 0.118397 / 0.141683 (-0.023286) | 1.726759 / 1.452155 (0.274604) | 1.811928 / 1.492716 (0.319212) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239186 / 0.018006 (0.221180) | 0.483728 / 0.000490 (0.483238) | 0.003285 / 0.000200 (0.003085) | 0.000097 / 0.000054 (0.000043) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030514 / 0.037411 (-0.006898) | 0.127111 / 0.014526 (0.112585) | 0.136185 / 0.176557 (-0.040371) | 0.204541 / 0.737135 (-0.532594) | 0.143228 / 0.296338 (-0.153111) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465840 / 0.215209 (0.250631) | 4.611160 / 2.077655 (2.533506) | 2.119307 / 1.504120 (0.615187) | 1.882463 / 1.541195 (0.341268) | 1.946067 / 1.468490 
(0.477577) | 0.602352 / 4.584777 (-3.982425) | 4.576313 / 3.745712 (0.830601) | 2.112860 / 5.269862 (-3.157001) | 1.224388 / 4.565676 (-3.341289) | 0.073808 / 0.424275 (-0.350467) | 0.013157 / 0.007607 (0.005550) | 0.592208 / 0.226044 (0.366163) | 5.948971 / 2.268929 (3.680042) | 2.690144 / 55.444624 (-52.754480) | 2.236489 / 6.876477 (-4.639987) | 2.423617 / 2.142072 (0.281545) | 0.752053 / 4.805227 (-4.053175) | 0.168185 / 6.500664 (-6.332480) | 0.075454 / 0.075469 (-0.000015) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.407432 / 1.841788 (-0.434356) | 17.054545 / 8.074308 (8.980236) | 15.661362 / 10.191392 (5.469970) | 0.175027 / 0.680424 (-0.505397) | 0.020262 / 0.534201 (-0.513939) | 0.479052 / 0.579283 (-0.100231) | 0.509829 / 0.434364 (0.075465) | 0.601935 / 0.540337 (0.061598) | 0.726754 / 1.386936 (-0.660182) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007698 / 0.011353 (-0.003655) | 0.005267 / 0.011008 (-0.005741) | 0.085832 / 0.038508 (0.047324) | 0.041974 / 0.023109 (0.018865) | 0.418966 / 0.275898 (0.143068) | 0.466314 / 0.323480 (0.142834) | 0.006580 / 0.007986 (-0.001406) | 0.007063 / 0.004328 (0.002735) | 0.087120 / 0.004250 (0.082870) | 0.054908 / 0.037052 (0.017856) | 0.423813 / 0.258489 (0.165323) | 0.489878 / 0.293841 (0.196037) | 0.032823 / 0.128546 (-0.095723) | 0.010471 / 0.075646 (-0.065175) | 0.095839 / 0.419271 (-0.323432) | 0.056421 / 0.043533 (0.012888) | 0.420526 / 0.255139 (0.165387) | 0.447975 / 0.283200 (0.164775) | 0.126604 / 0.141683 (-0.015079) | 1.723097 / 1.452155 (0.270942) | 1.819539 / 1.492716 (0.326822) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.279604 / 0.018006 (0.261598) | 0.496129 / 0.000490 (0.495639) | 0.005419 / 0.000200 (0.005219) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035069 / 0.037411 (-0.002343) | 0.133064 / 0.014526 (0.118538) | 0.145404 / 0.176557 (-0.031152) | 0.205237 / 0.737135 (-0.531898) | 0.150684 / 0.296338 (-0.145654) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513596 / 0.215209 (0.298387) | 5.104861 / 2.077655 (3.027206) | 2.487908 / 1.504120 (0.983788) | 2.271383 / 1.541195 (0.730188) | 2.421043 / 1.468490 (0.952553) | 0.625204 / 4.584777 (-3.959573) | 4.555389 / 3.745712 (0.809677) | 4.181518 / 5.269862 (-1.088344) | 1.676059 / 4.565676 (-2.889617) | 0.078786 / 0.424275 (-0.345489) | 0.014186 / 0.007607 (0.006579) | 0.638360 / 0.226044 (0.412315) | 6.367915 / 2.268929 (4.098986) | 3.095175 / 55.444624 (-52.349449) | 2.706707 / 6.876477 (-4.169769) | 2.735907 / 2.142072 (0.593835) | 0.756323 / 4.805227 (-4.048905) | 0.164783 / 6.500664 (-6.335881) | 0.076291 / 0.075469 (0.000822) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.667058 / 1.841788 (-0.174730) | 18.687459 / 8.074308 (10.613151) | 17.111596 / 10.191392 (6.920204) | 0.167218 / 0.680424 (-0.513206) | 0.020995 / 0.534201 (-0.513206) | 0.463985 / 0.579283 (-0.115298) | 0.502705 / 0.434364 (0.068341) | 0.562877 / 0.540337 (0.022540) | 0.682249 / 1.386936 (-0.704687) |\n\n</details>\n</details>\n\n\n",
"> Maybe we should fix all the tests in test_iterable_dataset.py that contain .with_format(\"torch\")?\r\n\r\nthey're updated in https://github.com/huggingface/datasets/pull/5852",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005931 / 0.011353 (-0.005421) | 0.004004 / 0.011008 (-0.007004) | 0.098632 / 0.038508 (0.060124) | 0.027820 / 0.023109 (0.004711) | 0.302944 / 0.275898 (0.027046) | 0.332684 / 0.323480 (0.009204) | 0.005529 / 0.007986 (-0.002457) | 0.004814 / 0.004328 (0.000485) | 0.074477 / 0.004250 (0.070227) | 0.034875 / 0.037052 (-0.002178) | 0.304542 / 0.258489 (0.046053) | 0.342853 / 0.293841 (0.049012) | 0.025263 / 0.128546 (-0.103283) | 0.008558 / 0.075646 (-0.067089) | 0.322522 / 0.419271 (-0.096750) | 0.043980 / 0.043533 (0.000447) | 0.306618 / 0.255139 (0.051479) | 0.331692 / 0.283200 (0.048492) | 0.087434 / 0.141683 (-0.054248) | 1.464686 / 1.452155 (0.012531) | 1.575038 / 1.492716 (0.082322) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221920 / 0.018006 (0.203914) | 0.417108 / 0.000490 (0.416619) | 0.004625 / 0.000200 (0.004425) | 0.000079 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023493 / 0.037411 (-0.013918) | 0.096684 / 0.014526 (0.082158) | 0.102035 / 0.176557 (-0.074522) | 0.166609 / 0.737135 (-0.570526) | 0.107456 / 0.296338 (-0.188883) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418713 / 0.215209 (0.203504) | 4.156913 / 2.077655 (2.079258) | 1.869064 / 1.504120 (0.364944) | 1.666219 / 1.541195 (0.125024) | 1.676491 / 1.468490 
(0.208001) | 0.553843 / 4.584777 (-4.030934) | 3.380471 / 3.745712 (-0.365241) | 2.970370 / 5.269862 (-2.299491) | 1.421597 / 4.565676 (-3.144080) | 0.068019 / 0.424275 (-0.356256) | 0.012995 / 0.007607 (0.005387) | 0.519410 / 0.226044 (0.293365) | 5.198251 / 2.268929 (2.929323) | 2.352969 / 55.444624 (-53.091655) | 2.008981 / 6.876477 (-4.867496) | 2.066519 / 2.142072 (-0.075553) | 0.658982 / 4.805227 (-4.146245) | 0.134341 / 6.500664 (-6.366323) | 0.065893 / 0.075469 (-0.009576) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.207509 / 1.841788 (-0.634279) | 13.863838 / 8.074308 (5.789530) | 13.363359 / 10.191392 (3.171967) | 0.129076 / 0.680424 (-0.551348) | 0.016818 / 0.534201 (-0.517383) | 0.357956 / 0.579283 (-0.221327) | 0.386174 / 0.434364 (-0.048189) | 0.418663 / 0.540337 (-0.121674) | 0.498708 / 1.386936 (-0.888228) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006132 / 0.011353 (-0.005220) | 0.004335 / 0.011008 (-0.006673) | 0.078517 / 0.038508 (0.040009) | 0.027685 / 0.023109 (0.004576) | 0.357956 / 0.275898 (0.082058) | 0.392397 / 0.323480 (0.068918) | 0.005364 / 0.007986 (-0.002622) | 0.004922 / 0.004328 (0.000593) | 0.078061 / 0.004250 (0.073810) | 0.038889 / 0.037052 (0.001837) | 0.360952 / 0.258489 (0.102463) | 0.402790 / 0.293841 (0.108949) | 0.025542 / 0.128546 (-0.103004) | 0.008718 / 0.075646 (-0.066929) | 0.085799 / 0.419271 (-0.333472) | 0.044256 / 0.043533 (0.000723) | 0.358366 / 0.255139 (0.103227) | 0.393500 / 0.283200 (0.110300) | 0.096382 / 0.141683 (-0.045301) | 1.530889 / 1.452155 (0.078735) | 1.621007 / 1.492716 (0.128291) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180572 / 0.018006 (0.162566) | 0.429478 / 0.000490 (0.428988) | 0.002966 / 0.000200 (0.002766) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024530 / 0.037411 (-0.012881) | 0.101401 / 0.014526 (0.086875) | 0.108208 / 0.176557 (-0.068349) | 0.159582 / 0.737135 (-0.577554) | 0.111170 / 0.296338 (-0.185168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465768 / 0.215209 (0.250559) | 4.706311 / 2.077655 (2.628656) | 2.437756 / 1.504120 (0.933636) | 2.245694 / 1.541195 (0.704499) | 2.282637 / 1.468490 (0.814147) | 0.552752 / 4.584777 (-4.032025) | 3.432992 / 3.745712 (-0.312720) | 1.800054 / 5.269862 (-3.469808) | 1.037852 / 4.565676 (-3.527824) | 0.068240 / 0.424275 (-0.356035) | 0.012433 / 0.007607 (0.004826) | 0.574867 / 0.226044 (0.348822) | 5.707623 / 2.268929 (3.438695) | 2.909746 / 55.444624 (-52.534878) | 2.585423 / 6.876477 (-4.291054) | 2.636801 / 2.142072 (0.494729) | 0.686593 / 4.805227 (-4.118634) | 0.136633 / 6.500664 (-6.364031) | 0.068598 / 0.075469 (-0.006871) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.286628 / 1.841788 (-0.555159) | 14.333258 / 8.074308 (6.258949) | 14.355793 / 10.191392 (4.164401) | 0.133459 / 0.680424 (-0.546965) | 0.017090 / 0.534201 (-0.517111) | 0.358852 / 0.579283 (-0.220431) | 0.399929 / 0.434364 (-0.034435) | 0.422838 / 0.540337 (-0.117500) | 0.515199 / 1.386936 (-0.871737) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3292/comments | https://api.github.com/repos/huggingface/datasets/issues/3292/events | https://github.com/huggingface/datasets/issues/3292 | 1,056,962,554 | I_kwDODunzps4-__f6 | 3,292 | Not able to load 'wikipedia' dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-11-18T05:41:18Z | 2021-11-19T16:49:29Z | 2021-11-19T16:49:29Z | null | ## Describe the bug
I am following the instructions for loading the wikipedia dataset using datasets. However, I am getting the error below.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("wikipedia")
```
## Expected results
A clear and concise description of the expected results.
## Actual results
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
339 "Config name is missing."
340 "\nPlease pick one among the available configs: %s" % list(self.builder_configs.keys())
--> 341 + "\nExample of usage:\n\t`{}`".format(example_of_usage)
342 )
343 builder_config = self.BUILDER_CONFIGS[0]
ValueError: Config name is missing.
Please pick one among the available configs: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', 
'20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu']
Example of usage:
`load_dataset('wikipedia', '20200501.aa')`
I think a second parameter (the config name) is required by the load_dataset function but is not shown in the instructions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3292/timeline | null | completed | null | null | false | [
"Hi ! Indeed it looks like the code snippet on the Hugging face Hub doesn't show the second parameter\r\n\r\n\r\n\r\nThanks for reporting, I'm taking a look\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5248/comments | https://api.github.com/repos/huggingface/datasets/issues/5248/events | https://github.com/huggingface/datasets/pull/5248 | 1,451,338,676 | PR_kwDODunzps5DAqwt | 5,248 | Complete doc migration | [] | closed | false | null | 2 | 2022-11-16T10:41:04Z | 2022-11-16T15:06:50Z | 2022-11-16T10:41:10Z | null | Reverts huggingface/datasets#5214
Everything is handled on the doc-builder side now 😊 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5248/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5248",
"merged_at": "2022-11-16T10:41:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5248"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5248). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the fix @mishig25.\r\n\r\nI guess this is the reason why the docs are not generated for the latest release version 2.7.0? https://huggingface.co/docs/datasets/index "
] |
https://api.github.com/repos/huggingface/datasets/issues/1972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1972/comments | https://api.github.com/repos/huggingface/datasets/issues/1972/events | https://github.com/huggingface/datasets/issues/1972 | 819,752,761 | MDU6SXNzdWU4MTk3NTI3NjE= | 1,972 | 'Dataset' object has no attribute 'rename_column' | [] | closed | false | null | 1 | 2021-03-02T08:01:49Z | 2022-06-01T16:08:47Z | 2022-06-01T16:08:47Z | null | 'Dataset' object has no attribute 'rename_column' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1972/timeline | null | completed | null | null | false | [
"Hi ! `rename_column` has been added recently and will be available in the next release"
] |
https://api.github.com/repos/huggingface/datasets/issues/249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/249/comments | https://api.github.com/repos/huggingface/datasets/issues/249/events | https://github.com/huggingface/datasets/issues/249 | 633,393,443 | MDU6SXNzdWU2MzMzOTM0NDM= | 249 | [Dataset created] some critical small issues when I was creating a dataset | [] | closed | false | null | 2 | 2020-06-07T12:58:54Z | 2020-06-12T08:28:51Z | 2020-06-12T08:28:51Z | null | Hi, I successfully created a dataset and has made a pr #248.
But I encountered several problems while creating it, and they should be easy to fix.
1. Not found dataset_info.json
This should be fixed by #241; I am eager for it to be merged.
2. Forced to install `apache_beam`
If we have to install it, then it might be better to include it in the package dependencies or specify it in `CONTRIBUTING.md` (see the sketch after the traceback below).
```
Traceback (most recent call last):
File "nlp-cli", line 10, in <module>
from nlp.commands.run_beam import RunBeamCommand
File "/home/yisiang/nlp/src/nlp/commands/run_beam.py", line 6, in <module>
import apache_beam as beam
ModuleNotFoundError: No module named 'apache_beam'
```
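For illustration, a sketch of how the dependency could be declared as an optional extra in `setup.py` instead of being required implicitly (the names here are assumptions, not the actual packaging setup):
```python
# Sketch (assumption): declare apache-beam as an optional extra
# so it is only installed by users who actually need Beam-based datasets.
BEAM_REQUIRE = ["apache-beam"]

EXTRAS_REQUIRE = {
    "beam": BEAM_REQUIRE,  # installed via: pip install nlp[beam]
}
# ... and pass extras_require=EXTRAS_REQUIRE to setuptools.setup(...)
```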
3. `cached_dir` is `None`
```
File "/home/yisiang/nlp/src/nlp/datasets/bookscorpus/aea0bd5142d26df645a8fce23d6110bb95ecb81772bb2a1f29012e329191962c/bookscorpus.py", line 88, in _split_generators
downloaded_path_or_paths = dl_manager.download_custom(_GDRIVE_FILE_ID, download_file_from_google_drive)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 128, in download_custom
downloaded_path_or_paths = map_nested(url_to_downloaded_path, url_or_urls)
File "/home/yisiang/nlp/src/nlp/utils/py_utils.py", line 172, in map_nested
return function(data_struct)
File "/home/yisiang/nlp/src/nlp/utils/download_manager.py", line 126, in url_to_downloaded_path
return os.path.join(self._download_config.cache_dir, hash_url_to_filename(url))
File "/home/yisiang/miniconda3/envs/nlppr/lib/python3.7/posixpath.py", line 80, in join
a = os.fspath(a)
```
This is because this line
https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/src/nlp/commands/test.py#L30-L32
After adding `--cache_dir="...."` to `python nlp-cli test datasets/<your-dataset-folder> --save_infos --all_configs` from the doc, I could finally get past this error.
But it seems to ignore my arg and use `/home/yisiang/.cache/huggingface/datasets/bookscorpus/plain_text/1.0.0` as the cache_dir
4. There is no `pytest`
So maybe the doc should specify a step to install pytest.
5. Not enough capacity in my `/tmp`
When running the test for dummy data, I don't know why it asks me for 5.6 GiB to download something:
```
def download_and_prepare
...
if not utils.has_sufficient_disk_space(self.info.size_in_bytes or 0, directory=self._cache_dir_root):
raise IOError(
"Not enough disk space. Needed: {} (download: {}, generated: {})".format(
utils.size_str(self.info.size_in_bytes or 0),
utils.size_str(self.info.download_size or 0),
> utils.size_str(self.info.dataset_size or 0),
)
)
E OSError: Not enough disk space. Needed: 5.62 GiB (download: 1.10 GiB, generated: 4.52 GiB)
```
I added `processed_temp_dir="some/dir"; raw_temp_dir="another/dir"` to line 71, and the test passed.
https://github.com/huggingface/nlp/blob/a67a6c422dece904b65d18af65f0e024e839dbe8/tests/test_dataset_common.py#L70-L72
I suggest we create the tmp dir under `/home/user/tmp` rather than `/tmp`, because on our lab server, for example, everyone uses `/tmp`, so it has little capacity left. Or at least we could improve the error message so the user knows which directory has no space and how much is left. Or we could do both. A sketch of a user-side workaround follows below.
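For illustration, a sketch of how a user could redirect temporary files today (assumption: the test machinery goes through Python's `tempfile`, which honours `TMPDIR`):
```python
# Sketch: point Python's tempfile machinery at a user-owned directory instead
# of /tmp (assumes the dummy-data tests create temp files via tempfile).
import os
import tempfile

user_tmp = os.path.expanduser("~/tmp")
os.makedirs(user_tmp, exist_ok=True)
os.environ["TMPDIR"] = user_tmp
tempfile.tempdir = None          # drop the cached value so TMPDIR is re-read
print(tempfile.gettempdir())     # now resolves to /home/<user>/tmp
```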
6. name of datasets
I was surprised by the dataset name `books_corpus`, and didn't know it comes from `class BooksCorpus(nlp.GeneratorBasedBuilder)`. I changed it to `Bookscorpus` afterwards. I think this point should also be in the doc.
7. More thorough doc on how to create `dataset.py`
I believe there will be.
**Feel free to close this issue** if you think these are solved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/249/timeline | null | completed | null | null | false | [
"Thanks for noticing all these :) They should be easy to fix indeed",
"Alright I think I fixed all the problems you mentioned. Thanks again, that will be useful for many people.\r\nThere is still more work needed for point 7. but we plan to have some nice docs soon."
] |
https://api.github.com/repos/huggingface/datasets/issues/4084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4084/comments | https://api.github.com/repos/huggingface/datasets/issues/4084/events | https://github.com/huggingface/datasets/issues/4084 | 1,190,060,415 | I_kwDODunzps5G7uF_ | 4,084 | Errors in `Train with Datasets` Tensorflow code section on Huggingface.co | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-04-01T17:02:47Z | 2022-04-04T07:24:37Z | 2022-04-04T07:21:31Z | null | ## Describe the bug
Hi
### Error 1
Running the TensorFlow code on [Huggingface](https://huggingface.co/docs/datasets/use_dataset) gives a `TypeError: __init__() got an unexpected keyword argument 'return_tensors'`
### Error 2
`DataCollatorWithPadding` isn't imported
## Steps to reproduce the bug
```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer
dataset = load_dataset('glue', 'mrpc', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
dataset = dataset.map(lambda e: tokenizer(e['sentence1'], truncation=True, padding='max_length'), batched=True)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
columns=['input_ids', 'token_type_ids', 'attention_mask', 'label'],
shuffle=True,
batch_size=16,
collate_fn=data_collator,
)
```
This is the same code on Huggingface.co
## Actual results
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
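For reference, a sketch of how the snippet might look on a recent `transformers` version, with the missing import added; this is an assumption-based example (it assumes `transformers >= 4.10.0`, where the data collators accept `return_tensors`), not the official docs snippet:
```python
# Corrected sketch (assumptions: transformers >= 4.10.0 and the missing
# DataCollatorWithPadding import added).
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

dataset = load_dataset("glue", "mrpc")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
dataset = dataset.map(
    lambda e: tokenizer(e["sentence1"], truncation=True, padding="max_length"),
    batched=True,
)

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
train_dataset = dataset["train"].to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask", "label"],
    shuffle=True,
    batch_size=16,
    collate_fn=data_collator,
)
```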
## Environment info
- `datasets` version: 2.0.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.7
- PyArrow version: 6.0.0
- Pandas version: 1.4.1
> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4084/timeline | null | completed | null | null | false | [
"Hi @blackhat-coder, thanks for reporting.\r\n\r\nPlease note that the `transformers` library updated their data collators API last year (version 4.10.0):\r\n- huggingface/transformers#13105\r\n\r\nnow requiring to pass `return_tensors` argument at Data Collator instantiation.\r\n\r\nAnd therefore, we also updated in the `datasets` library documentation all the examples using `transformers` data collators.\r\n\r\nIf you would like to follow our examples, please update your installed `transformers` version:\r\n```\r\npip install -U transformers\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/15 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/15/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/15/comments | https://api.github.com/repos/huggingface/datasets/issues/15/events | https://github.com/huggingface/datasets/pull/15 | 604,906,708 | MDExOlB1bGxSZXF1ZXN0NDA3NDEwOTk3 | 15 | [Tests] General Test Design for all dataset scripts | [] | closed | false | null | 10 | 2020-04-22T16:46:01Z | 2022-10-04T09:31:54Z | 2020-04-27T14:48:02Z | null | The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as minimal as possible.
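For illustration, a minimal sketch of the shape this mixin-based design could take (hypothetical names only, not the actual implementation; it assumes the `parameterized` package discussed in the comments below):
```python
# Hypothetical sketch of the shared test design (names are illustrative only).
from unittest import TestCase

from parameterized import parameterized

DATASET_NAMES = ["squad", "crime_and_punish", "sentiment140"]  # example names


class DatasetTesterMixin:
    def check_download_and_generate(self, dataset_name):
        # Shared logic lives here: build the dataset with a mock download
        # manager that returns a fake folder structure instead of real data.
        ...


class DatasetTest(TestCase, DatasetTesterMixin):
    @parameterized.expand(DATASET_NAMES)
    def test_dataset(self, dataset_name):
        self.check_download_and_generate(dataset_name)
```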
In order to test whether the specific data set class can download the data and generate the examples **without** downloading the actual data all the time, a MockDataLoaderManager class is used which receives a `mock_folder_structure_fn` function from each individual dataset test file that creates "fake" data and which returns the same folder structure that would have been created when using the real data downloader. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/15/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/15/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/15.diff",
"html_url": "https://github.com/huggingface/datasets/pull/15",
"merged_at": "2020-04-27T14:48:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/15.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/15"
} | true | [
"> I think I'm fine with this.\r\n> \r\n> The alternative would be to host a small subset of the dataset on the S3 together with the testing script. But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> \r\n> Good for me!\r\n> \r\n> One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n\r\nI think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets. \r\n\r\nAs far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets/tensorflow_datasets/testing/test_data/fake_examples`. \r\n\r\nMy hypothesis was: \r\nBecasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo. \r\n\r\n**BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script. \r\n\r\nSo for a cli this could look like:\r\n`python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...` \r\n\r\nor even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n\r\n`python nlp-cli upload <path/to/dataset/folder>`\r\n\r\nand the dataset folder would look like\r\n```\r\nsquad\r\n- squad.py\r\n- fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n```\r\n\r\nThis way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script. \r\nWe just assume that this is our downloaded `url` and check all functionality from there. \r\n\r\nThinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n\r\nA small problem I see here though:\r\n1) What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n\r\n@thomwolf What do you think? I would actually go for this solution instead now.\r\n@mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think? \r\n\r\n",
"Regarding the tfds tests, I don't really see a point in keeping them because:\r\n\r\n1) If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam / pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well \r\n\r\n2) The dataset test design is very much intertwined with the download manager design and contains a lot of code. I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first. ",
"I like the idea of having a fake data folder on S3. I have seen datasets with nested compressed files structures that would be tedious to generate with code. And for users it is probably easier to create a fake data folder by taking a subset of the actual data, and then upload it as you said.",
"> > I think I'm fine with this.\r\n> > The alternative would be to host a small subset of the dataset on the S3 together with the testing script. But I think having all (test file creation + actual tests) in one file is actually quite convenient.\r\n> > Good for me!\r\n> > One question though, will we have to create one test file for each of the 100+ datasets or could we make some automatic conversion from tfds dataset test files?\r\n> \r\n> I think if we go the way shown in the PR we would have to create one test file for each of the 100+ datasets.\r\n> \r\n> As far as I know the tfds test files all rely on the user having created a special download folder structure in `tensorflow-datasets/tensorflow_datasets/testing/test_data/fake_examples`.\r\n> \r\n> My hypothesis was:\r\n> Becasue, we don't want to work with PRs, no `dataset_script` is going to be in the official repo, so no `dataset_script_test` can be in the repo either. Therefore we can also not have any \"fake\" test folder structure in the repo.\r\n> \r\n> **BUT:** As you mentioned @thom, we could have a fake data structure on AWS. To add a test the user has to upload multiple small test files when uploading his data set script.\r\n> \r\n> So for a cli this could look like:\r\n> `python nlp-cli upload <data_set_script> --testfiles <relative path to test file 1> <relative path to test file 2> ...`\r\n> \r\n> or even easier if the user just creates the dataset folder with the script inside and the testing folder structure, then the API could look like:\r\n> \r\n> `python nlp-cli upload <path/to/dataset/folder>`\r\n> \r\n> and the dataset folder would look like\r\n> \r\n> ```\r\n> squad\r\n> - squad.py\r\n> - fake_data # this dir would have to have the exact same structure we get when downloading from the official squad data url\r\n> ```\r\n> \r\n> This way I think we wouldn't even need any test files at all for each dataset script. For special datasets like `c4` or `wikipedia` we could then allow to optionally upload another test script.\r\n> We just assume that this is our downloaded `url` and check all functionality from there.\r\n> \r\n> Thinking a bit more about this solution sounds a) much less work and b) even easier for the user.\r\n> \r\n> A small problem I see here though:\r\n> \r\n> 1. What do we do when the depending on the config name the downloaded folder structure is very different? I think for each dataset config name we should have one test, which could correspond to one \"fake\" folder structure on AWS\r\n> \r\n> @thomwolf What do you think? I would actually go for this solution instead now.\r\n> @mariamabarham You have written many more tfds dataset scripts and tests than I have - what do you think?\r\n\r\nI'm agreed with you just one thing, for some dataset like glue or xtreme you may have multiple datasets in it. so I think a good way is to have one main fake folder and a subdirectory for each dataset inside",
"> Regarding the tfds tests, I don't really see a point in keeping them because:\r\n> \r\n> 1. If you provide a fake data structure, IMO there is no need for each dataset to have an individual test file because (I think) most datasets have the same functions `_split_generators` and `_generate_examples` for which you can just test the functionality in a common test file. For special functions like these beam / pipeline functionality you probably need an extra test file. But @mariamabarham I think you have seen more than I have here as well\r\n> 2. The dataset test design is very much intertwined with the download manager design and contains a lot of code. I would like to seperate the tests into a) tests for downloading in general b) tests for post download data set pre-processing. Since we are going to change the download code anyways quite a lot, my plan was to focus on b) first.\r\n\r\nFor _split_generator, yes. But I'm not sure for _generate_examples because there is lots of things that should be taken into account such as feature names and types, data format (json, jsonl, csv, tsv,..)",
"Sounds good to me!\r\n\r\nWhen testing, we could thus just override the prefix in the URL inside the download manager to have them point to the test directory on our S3.\r\n\r\nCc @lhoestq ",
"Ok, here is a second draft for the testing structure. \r\n\r\nI think the big difficulty here is \"How can you generate tests on the fly from a given dataset name, *e.g.* `squad`\"?\r\n\r\nSo, this morning I did some research on \"parameterized testing\" and pure `unittest` or `pytest` didn't work very well. \r\nI found the lib https://github.com/wolever/parameterized, which works very nicely for our use case I think. \r\n@thomwolf - would it be ok to have a dependence on this lib for `nlp`? It seems like a light-weight lib to me. \r\n\r\nThis lib allows to add a `parameterization` decorator to a `unittest.TestCase` class so that the class can be instantiated for multiple different arguments (which are the dataset names `squad` etc. in our case).\r\n\r\nWhat I like about this lib is that one only has to add the decorator and the each of the parameterized tests are shown, like this: \r\n\r\n\r\n\r\nWith this structure we would only have to upload the dummy data for each dataset and would not require a specific testing file. \r\n\r\nWhat do you think @thomwolf @mariamabarham @lhoestq ?",
"I think this is a nice solution.\r\n\r\nDo you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible. ",
"> I think this is a nice solution.\r\n> \r\n> Do you think we could have the `parametrized` dependency in a `[test]` optional installation of `setup.py`? I would really like to keep the dependencies of the standard installation as small as possible.\r\n\r\nYes definitely!",
"UPDATE: \r\n\r\nThis test design is ready now. I added dummy data to S3 for the dataests: `squad, crime_and_punish, sentiment140` . The structure can be seen on `https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/squad/dummy/?region=us-east-1&tab=overview` for `squad`. \r\n\r\nAll dummy data files have to be in .zip format and called `dummy_data.zip`. The zip file should thereby have the exact same folder structure one gets from downloading the \"real\" data url(s). \r\n\r\nTo show how the .zip file looks like for the added datasets, I added the folder `nlp/datasets/dummy_data` in this PR. I think we can leave for the moment so that people can see better how to add dummy data tests and later delete it like `nlp/datasets/nlp`."
] |
https://api.github.com/repos/huggingface/datasets/issues/84 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/84/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/84/comments | https://api.github.com/repos/huggingface/datasets/issues/84/events | https://github.com/huggingface/datasets/pull/84 | 617,249,815 | MDExOlB1bGxSZXF1ZXN0NDE3MjAxODcz | 84 | [TedHrLr] add left dummy data | [] | closed | false | null | 0 | 2020-05-13T08:27:20Z | 2020-05-13T08:29:22Z | 2020-05-13T08:29:21Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/84/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/84/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/84.diff",
"html_url": "https://github.com/huggingface/datasets/pull/84",
"merged_at": "2020-05-13T08:29:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/84.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/84"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5721/comments | https://api.github.com/repos/huggingface/datasets/issues/5721/events | https://github.com/huggingface/datasets/issues/5721 | 1,659,680,682 | I_kwDODunzps5i7Leq | 5,721 | Calling datasets.load_dataset("text" ...) results in a wrong split. | [] | open | false | null | 0 | 2023-04-08T23:55:12Z | 2023-04-08T23:55:12Z | null | null | ### Describe the bug
When creating a text dataset, the training split should have the bulk of the examples by default. Currently, the test split does.
### Steps to reproduce the bug
I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code:
```
folder_path = "/home/cyril/Downloads/llama_dataset"
data = datasets.load_dataset("text", data_dir=folder_path)
data.save_to_disk("/home/cyril/Downloads/data.hf")
data = datasets.load_from_disk("/home/cyril/Downloads/data.hf")
print(data)
```
Results in the following split:
```
DatasetDict({
train: Dataset({
features: ['text'],
num_rows: 2114
})
test: Dataset({
features: ['text'],
num_rows: 200882
})
validation: Dataset({
features: ['text'],
num_rows: 152
})
})
```
It seems to me like the train/test/validation splits are assigned in the wrong order, since the test split is far larger than the train split (test >>>> train).
### Expected behavior
Train split should have the bulk of the training examples.
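A possible workaround sketch in the meantime (assuming the folder only contains `.txt` files; the path is taken from the snippet above):
```python
# Workaround sketch: list the files explicitly so every document is assigned to
# the split we name, instead of relying on filename-based split inference.
import glob
import os

import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"            # path from above
files = sorted(glob.glob(os.path.join(folder_path, "*.txt")))  # assumes .txt files
data = datasets.load_dataset("text", data_files={"train": files})
print(data)  # all rows should now end up in the train split
```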
### Environment info
datasets 2.11.0, python 3.10.6 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5721/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5281/comments | https://api.github.com/repos/huggingface/datasets/issues/5281/events | https://github.com/huggingface/datasets/issues/5281 | 1,459,930,271 | I_kwDODunzps5XBMSf | 5,281 | Support cloud storage in load_dataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | open | false | null | 13 | 2022-11-22T14:00:10Z | 2023-05-10T12:20:44Z | null | null | Would be nice to be able to do
```python
load_dataset("s3://...")
```
or even
```python
data_files=["gs://..."]
storage_options = {...}
load_dataset(..., data_files=data_files, storage_options=storage_options)
```
The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`.
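For illustration, a rough sketch of what the fsspec-based resolution could look like under the hood (hypothetical, not the actual implementation; the bucket path and credentials below are made up):
```python
# Hypothetical sketch: resolve a remote path with fsspec before reading it,
# the way download_and_prepare already does. Requires the matching filesystem
# package (e.g. s3fs for s3:// paths).
import fsspec

storage_options = {"key": "<aws key>", "secret": "<aws secret>"}  # placeholders
fs, _, paths = fsspec.get_fs_token_paths(
    "s3://my-bucket/my-dataset/*.jsonl", storage_options=storage_options
)
with fs.open(paths[0]) as f:
    print(f.readline())
```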
This has been requested several times already. Some users want to use their data from private cloud storage to train models
related:
https://github.com/huggingface/datasets/issues/3490
https://github.com/huggingface/datasets/issues/5244
[forum](https://discuss.huggingface.co/t/how-to-use-s3-path-with-load-dataset-with-streaming-true/25739/2) | {
"+1": 17,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 10,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 27,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5281/timeline | null | reopened | null | null | false | [
"Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead...",
"+1 to this. I would like to use 'audiofolder' with a data_dir that's on S3, for example. I don't want to upload my dataset to the Hub, but I would find all the fingerprinting/caching features useful.",
"Adding to the conversation, Dask also uses `fsspec` for this feature.\r\n\r\n[Dask: How to connect to remote data](https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html)\r\n\r\nHappy to help on this feature :D ",
"+1 to this feature request since I think it also tackles my use-case. I am collaborating with a team, working with a loading script which takes some time to generate the dataset artifacts. It would be very handy to use this as a cloud cache to avoid duplicating the effort. \r\n\r\nCurrently we could use `builder.download_and_prepare(path_to_cloud_storage, storage_options, ...)` to cache the artifacts to cloud storage, but then `builder.as_dataset()` yields `NotImplementedError: Loading a dataset cached in SomeCloudFileSystem is not supported`",
"Makes sense ! If you want to load locally a dataset that you download_and_prepared on a cloud storage, you would use `load_dataset(path_to_cloud_storage)` indeed. It would download the data from the cloud storage, cache them locally, and return a `Dataset`.",
"It seems currently the `cached_path` function handles all URLs by `get_from_cache` that only supports `ftp` and `http(s)` here:\r\nhttps://github.com/huggingface/datasets/blob/b5672a956d5de864e6f5550e493527d962d6ae55/src/datasets/utils/file_utils.py#L181\r\n\r\nI guess one can add another condition that handles `s3://` or `gs://` URLs via `fsspec` here.",
"I could use this functionality, so I put together a PR using @kyamagu's suggestion to use `fsspec` in `datasets.utils.file_utils`\r\n\r\nhttps://github.com/huggingface/datasets/pull/5580",
"Thanks @dwyatte for adding support for fsspec urls\r\n\r\nLet me just reopen this since the original issue is not resolved",
"I'm not yet understanding how to use https://github.com/huggingface/datasets/pull/5580 in order to use `load_dataset(data_files=\"s3://...\")`. Any help/example would be much appreciated :) thanks! ",
"It's still not officially supported x) But you can try to update `request_etag` in `file_utils.py` to use `fsspec_head` instead of `http_head`. It is responsible of getting the ETags of the remote files for caching. This change may do the trick for S3 urls",
"Thank you for your guys help on this and merging in #5580. I manually pulled the changes to my local datasets package (datasets.utils.file_utils.py) since it only seemed to be this file that was changed in the PR and I'm getting the error: \r\nInvalidSchema: No connection adapters were found for 's3://bucket/folder/'. I'm calling load_dataset using the S3 URI. When I use the S3 URL I get HTTPError: 403 Client Error. \r\nAm I not supposed to use the S3 URI? How do I pull in the changes from this merge? I'm running datasets 2.10.1. ",
"The current implementation depends on gcsfs/s3fs being able to authenticate through some other means e.g., environmental variables. For AWS, it looks like you can set `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_SESSION_TOKEN`\r\n\r\nNote that while testing this just now, I did note a discrepancy between gcsfs and s3fs that we might want to address where gcsfs passes the timeout from `storage_options` [here](https://github.com/huggingface/datasets/blob/3e6269979fc80ae8939294d26298897f0db5b84d/src/datasets/utils/file_utils.py#L333) down into the `aiohttp.ClientSession.request`, but s3fs does not handle this (tries to pass to the `aiobotocore.session.AioSession` constructor raising `TypeError: __init__() got an unexpected keyword argument 'requests_timeout'`).\r\n\r\nIt seems like some work trying to unify kwargs across different fsspec implementations, so if the plan is to pass down `storage_options`, I wonder if we should just let users control the timeout (and other kwargs) using that and if not specified, use the default?",
"> Note that while testing this just now, I did note a discrepancy between gcsfs and s3fs that we might want to address where gcsfs passes the timeout from storage_options [here](https://github.com/huggingface/datasets/blob/3e6269979fc80ae8939294d26298897f0db5b84d/src/datasets/utils/file_utils.py#L333) down into the aiohttp.ClientSession.request, but s3fs does not handle this (tries to pass to the aiobotocore.session.AioSession constructor raising TypeError: __init__() got an unexpected keyword argument 'requests_timeout').\r\n\r\n> It seems like some work trying to unify kwargs across different fsspec implementations, so if the plan is to pass down storage_options, I wonder if we should just let users control the timeout (and other kwargs) and if not specified, use the default?\r\n\r\n@lhoestq here's a small PR for this: https://github.com/huggingface/datasets/pull/5673\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1074/comments | https://api.github.com/repos/huggingface/datasets/issues/1074/events | https://github.com/huggingface/datasets/pull/1074 | 756,483,172 | MDExOlB1bGxSZXF1ZXN0NTMyMDIyNTIy | 1,074 | Swedish MT STS-B | [] | closed | false | null | 0 | 2020-12-03T19:06:25Z | 2020-12-04T20:22:27Z | 2020-12-03T20:44:28Z | null | Added a Swedish machine translated version of the well known STS-B Corpus | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1074/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1074",
"merged_at": "2020-12-03T20:44:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1074"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4053 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4053/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4053/comments | https://api.github.com/repos/huggingface/datasets/issues/4053/events | https://github.com/huggingface/datasets/issues/4053 | 1,184,500,378 | I_kwDODunzps5Gmgqa | 4,053 | Modify datatype from `int32` to `float` for pearsonr, spearmanr. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2022-03-29T08:27:41Z | 2022-03-29T14:02:20Z | 2022-03-29T14:02:20Z | null | **Is your feature request related to a problem? Please describe.**
- Now [Pearsonr](https://github.com/huggingface/datasets/blob/master/metrics/pearsonr/pearsonr.py) and [Spearmanr](https://github.com/huggingface/datasets/blob/master/metrics/spearmanr/spearmanr.py) both get input data as 'int32'.
**Describe the solution you'd like**
- Considering that those metrics are widely used for the STS task (labels are in 'float' data type),
it would be better to modify datatype from 'int32' to 'float' for getting exact values of similarity. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4053/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4053/timeline | null | completed | null | null | false | [
"@Woodywarhol9 good catch, thanks for reporting.\r\n\r\nWe are fixing this."
] |
https://api.github.com/repos/huggingface/datasets/issues/3377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3377/comments | https://api.github.com/repos/huggingface/datasets/issues/3377/events | https://github.com/huggingface/datasets/pull/3377 | 1,070,562,907 | PR_kwDODunzps4vXCHn | 3,377 | COCO 🥥 on the 🤗 Hub? | [] | closed | false | null | 4 | 2021-12-03T12:55:27Z | 2021-12-20T14:14:01Z | 2021-12-20T14:14:00Z | null | This is a draft PR since I ran into few small problems.
I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py
cc: @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3377/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3377.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3377",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3377.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3377"
} | true | [
"@mariosasko I fixed couple of bugs",
"TO-DO: \r\n- [x] Add unlabeled 2017 splits, train and validation splits of 2015\r\n- [x] Add Class Labels as list instead",
"@mariosasko added fine & coarse grained labels, will fix the bugs (currently getting set up with VM, my internet is too slow to run the tests and download the data 🥲)",
"migrated to here https://github.com/huggingface/datasets/tree/coco"
] |
https://api.github.com/repos/huggingface/datasets/issues/2277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2277/comments | https://api.github.com/repos/huggingface/datasets/issues/2277/events | https://github.com/huggingface/datasets/pull/2277 | 870,071,994 | MDExOlB1bGxSZXF1ZXN0NjI1MzI5NjIz | 2,277 | Create CacheManager | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | open | false | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | 0 | 2021-04-28T15:23:42Z | 2022-07-06T15:19:48Z | null | null | Perform refactoring to decouple cache functionality (method `as_dataset`). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2277/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2277.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2277",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2277.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2277"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5204/comments | https://api.github.com/repos/huggingface/datasets/issues/5204/events | https://github.com/huggingface/datasets/issues/5204 | 1,437,221,259 | I_kwDODunzps5VqkGL | 5,204 | `push_to_hub` not propagating `token` through `DownloadConfig` | [] | closed | false | null | 3 | 2022-11-05T23:32:20Z | 2022-11-08T10:12:09Z | 2022-11-08T10:12:08Z | null | ### Describe the bug
When trying to upload a new 🤗 Dataset to the Hub via Python, and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it just works for the first time, assuming that the dataset didn't exist before.
But when running `Dataset.push_to_hub` again over the same dataset, instead of updating it, it throws a `ConnectionError` while retrieving the `README.md` (which may contain metadata about the dataset that should be updated as well). Since the `token` is not propagated, the `DownloadConfig` provided to `datasets.utils.file_utils.get_from_cache` doesn't have `use_auth_token` set to `token`; it just uses the default, which is None/False.
So, when uploading a dataset via Python with `push_to_hub` and passing the HuggingFace API token as the `token` parameter, the upload only works when the dataset is new; otherwise it fails with a `ConnectionError` because the `token` is not propagated as `use_auth_token`.
### Steps to reproduce the bug
Let's create a new dataset in our HF account via Python as:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
```
When we create the `Dataset` for the first time it works and there are no issues, but when trying to actually upload a new version of the same dataset (same name under the same username), we encounter the following issue:
```python
from datasets import Dataset
data = {"a": [1, 2, 3], "b": [4, 5, 6]}
ds = Dataset.from_dict(data)
ds.push_to_hub(repo_id=<HF_USERNAME>/<HF_DATASET>, private=private, token=<HF_TOKEN_HERE>)
>>> ConnectionError: Couldn't reach https://huggingface.co/datasets/alvarobartt/demo/resolve/main/README.md (ConnectionError('Unauthorized for URL https://huggingface.co/datasets/<HF_USERNAME>/<HF_DATASET>/resolve/main/README.md. Please use the parameter `use_auth_token=True` after logging in with `huggingface-cli login`'))
```
### Expected behavior
Ideally, the `token` parameter provided to `push_to_hub` should be propagated and used to download the `README.md` when trying to update a `Dataset`, instead of throwing that exception, so that the authentication can be done directly through code without running `huggingface-cli login`, as mentioned at https://huggingface.co/docs/datasets/upload_dataset#upload-with-python.
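As a possible workaround in the meantime (a sketch, assuming a recent `huggingface_hub` that exposes `login`), authenticating globally before calling `push_to_hub` avoids the unauthenticated `README.md` fetch:
```python
# Workaround sketch: authenticate globally first so every request, including
# the README.md fetch on updates, carries the token.
from datasets import Dataset
from huggingface_hub import login

login(token="<HF_TOKEN_HERE>")  # placeholder token

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [4, 5, 6]})
ds.push_to_hub(repo_id="<HF_USERNAME>/<HF_DATASET>")
```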
### Environment info
- `datasets` version: 2.6.1
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.10.8
- PyArrow version: 10.0.0
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5204/timeline | null | completed | null | null | false | [
"#self-assign",
"@lhoestq can you close this issue as part of the recent #5205 merge? Thanks 🤗 ",
"Thank you :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/534/comments | https://api.github.com/repos/huggingface/datasets/issues/534/events | https://github.com/huggingface/datasets/issues/534 | 686,115,912 | MDU6SXNzdWU2ODYxMTU5MTI= | 534 | `list_datasets()` is broken. | [] | closed | false | null | 3 | 2020-08-26T08:19:01Z | 2020-08-27T06:31:11Z | 2020-08-27T06:31:11Z | null | version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj)
700 type_pprinters=self.type_printers,
701 deferred_pprinters=self.deferred_printers)
--> 702 printer.pretty(obj)
703 printer.flush()
704 return stream.getvalue()
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
375 if cls in self.type_pprinters:
376 # printer registered in self.type_pprinters
--> 377 return self.type_pprinters[cls](obj, self, cycle)
378 else:
379 # deferred printer
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle)
553 p.text(',')
554 p.breakable()
--> 555 p.pretty(x)
556 if len(obj) == 1 and type(obj) is tuple:
557 # Special case for 1-item tuples.
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj)
392 if cls is not object \
393 and callable(cls.__dict__.get('__repr__')):
--> 394 return _repr_pprint(obj, self, cycle)
395
396 return _default_pprint(obj, self, cycle)
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle)
698 """A pprint that just redirects to the normal repr function."""
699 # Find newlines and replace them with p.break_()
--> 700 output = repr(obj)
701 lines = output.splitlines()
702 with p.group():
~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self)
110
111 def __repr__(self):
--> 112 single_line_description = self.description.replace("\n", "")
113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})"
114
AttributeError: 'NoneType' object has no attribute 'replace'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/534/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release",
"What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```",
"Thanks @lhoestq . "
] |
https://api.github.com/repos/huggingface/datasets/issues/2803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2803/comments | https://api.github.com/repos/huggingface/datasets/issues/2803/events | https://github.com/huggingface/datasets/pull/2803 | 970,858,928 | MDExOlB1bGxSZXF1ZXN0NzEyNzQxODMz | 2,803 | add stack exchange | [] | closed | false | null | 2 | 2021-08-14T08:11:02Z | 2021-08-19T10:07:33Z | 2021-08-19T08:07:38Z | null | Stack Exchange is part of EleutherAI/The Pile, but AFAIK The Pile blends all its sub-datasets together, so we are not able to use just one sub-dataset from The Pile data. So I created an independent dataset using The Pile preliminary components.
I also changed the default `timeout` to 100 seconds instead of 10 seconds; otherwise I kept getting read timeouts when downloading the source data of the stack exchange and cc100 datasets.
When I was creating the dataset card, I found there is room for creating/editing dataset cards, so I've made it an issue: #2797
Also, I am wondering whether the import of The Pile dataset is being actively worked on (because I may need it soon)? #1675 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2803/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2803/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2803.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2803",
"merged_at": "2021-08-19T08:07:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2803.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2803"
} | true | [
"Hi ! Merging this one since it's all good :)\r\n\r\nHowever I think it would also be better to actually rename it `the_pile_stack_exchange` to make things clearer and to avoid name collisions in the future. I would like to do the same for `books3` as well.\r\n\r\nIf you don't mind I'll open a PR to do the renaming",
"\r\n> If you don't mind I'll open a PR to do the renaming\r\n\r\n@lhoestq That will be nice !!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5456/comments | https://api.github.com/repos/huggingface/datasets/issues/5456/events | https://github.com/huggingface/datasets/pull/5456 | 1,553,905,148 | PR_kwDODunzps5IXq92 | 5,456 | feat: tqdm for `to_parquet` | [] | closed | false | null | 2 | 2023-01-23T22:05:38Z | 2023-01-24T11:26:47Z | 2023-01-24T11:17:12Z | null | As described in #5418
I also noticed that the `to_json` function supports multiple workers, whereas `to_parquet` does not. Is that not possible or not needed with Parquet, or is it something that hasn't been implemented yet? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5456/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5456",
"merged_at": "2023-01-24T11:17:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5456"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012395 / 0.011353 (0.001042) | 0.006466 / 0.011008 (-0.004542) | 0.127605 / 0.038508 (0.089097) | 0.044929 / 0.023109 (0.021820) | 0.399856 / 0.275898 (0.123958) | 0.491341 / 0.323480 (0.167861) | 0.009193 / 0.007986 (0.001207) | 0.005419 / 0.004328 (0.001090) | 0.100577 / 0.004250 (0.096327) | 0.045338 / 0.037052 (0.008286) | 0.409970 / 0.258489 (0.151481) | 0.452941 / 0.293841 (0.159100) | 0.054350 / 0.128546 (-0.074197) | 0.019069 / 0.075646 (-0.056578) | 0.427036 / 0.419271 (0.007765) | 0.073616 / 0.043533 (0.030083) | 0.395384 / 0.255139 (0.140245) | 0.442381 / 0.283200 (0.159181) | 0.123185 / 0.141683 (-0.018498) | 1.797640 / 1.452155 (0.345485) | 1.888860 / 1.492716 (0.396143) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211041 / 0.018006 (0.193035) | 0.539350 / 0.000490 (0.538860) | 0.001683 / 0.000200 (0.001483) | 0.000118 / 0.000054 (0.000064) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031699 / 0.037411 (-0.005712) | 0.132696 / 0.014526 (0.118170) | 0.133710 / 0.176557 (-0.042846) | 0.190074 / 0.737135 (-0.547061) | 0.142919 / 0.296338 (-0.153420) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.643521 / 0.215209 (0.428312) | 6.137350 / 2.077655 (4.059695) | 2.463894 / 1.504120 (0.959774) | 2.120043 / 1.541195 (0.578848) | 2.121898 / 1.468490 
(0.653408) | 1.287319 / 4.584777 (-3.297458) | 5.517864 / 3.745712 (1.772151) | 5.070820 / 5.269862 (-0.199042) | 2.948967 / 4.565676 (-1.616710) | 0.175861 / 0.424275 (-0.248415) | 0.015292 / 0.007607 (0.007685) | 0.843195 / 0.226044 (0.617150) | 7.884275 / 2.268929 (5.615347) | 3.182821 / 55.444624 (-52.261803) | 2.576093 / 6.876477 (-4.300384) | 2.537160 / 2.142072 (0.395088) | 1.510029 / 4.805227 (-3.295198) | 0.249404 / 6.500664 (-6.251260) | 0.080434 / 0.075469 (0.004965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.618695 / 1.841788 (-0.223093) | 18.879207 / 8.074308 (10.804899) | 21.075272 / 10.191392 (10.883880) | 0.260781 / 0.680424 (-0.419643) | 0.046387 / 0.534201 (-0.487813) | 0.570709 / 0.579283 (-0.008574) | 0.619050 / 0.434364 (0.184686) | 0.642295 / 0.540337 (0.101958) | 0.780070 / 1.386936 (-0.606866) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010418 / 0.011353 (-0.000935) | 0.006104 / 0.011008 (-0.004905) | 0.133609 / 0.038508 (0.095101) | 0.035101 / 0.023109 (0.011992) | 0.471931 / 0.275898 (0.196033) | 0.504498 / 0.323480 (0.181018) | 0.007388 / 0.007986 (-0.000598) | 0.004852 / 0.004328 (0.000523) | 0.094535 / 0.004250 (0.090284) | 0.056832 / 0.037052 (0.019779) | 0.470513 / 0.258489 (0.212024) | 0.531285 / 0.293841 (0.237444) | 0.058271 / 0.128546 (-0.070276) | 0.020523 / 0.075646 (-0.055123) | 0.437398 / 0.419271 (0.018126) | 0.065390 / 0.043533 (0.021857) | 0.503702 / 0.255139 (0.248563) | 0.515876 / 0.283200 (0.232677) | 0.118615 / 0.141683 (-0.023068) | 1.865380 / 1.452155 (0.413225) | 1.990316 / 1.492716 (0.497600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246772 / 0.018006 (0.228766) | 0.560607 / 0.000490 (0.560118) | 0.005675 / 0.000200 (0.005475) | 0.000142 / 0.000054 (0.000088) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034692 / 0.037411 (-0.002719) | 0.174016 / 0.014526 (0.159490) | 0.179838 / 0.176557 (0.003282) | 0.217118 / 0.737135 (-0.520018) | 0.184811 / 0.296338 (-0.111527) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675970 / 0.215209 (0.460760) | 6.787039 / 2.077655 (4.709384) | 2.932619 / 1.504120 (1.428499) | 2.545076 / 1.541195 (1.003882) | 2.566705 / 1.468490 (1.098215) | 1.287365 / 4.584777 (-3.297412) | 5.468441 / 3.745712 (1.722729) | 5.227726 / 5.269862 (-0.042136) | 2.868970 / 4.565676 (-1.696706) | 0.153535 / 0.424275 (-0.270740) | 0.020087 / 0.007607 (0.012480) | 0.860562 / 0.226044 (0.634518) | 8.656109 / 2.268929 (6.387180) | 3.749424 / 55.444624 (-51.695200) | 3.011337 / 6.876477 (-3.865139) | 3.119045 / 2.142072 (0.976973) | 1.562174 / 4.805227 (-3.243053) | 0.279161 / 6.500664 (-6.221504) | 0.084905 / 0.075469 (0.009436) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.638684 / 1.841788 (-0.203104) | 18.834760 / 8.074308 (10.760452) | 21.554310 / 10.191392 (11.362918) | 0.274518 / 0.680424 (-0.405906) | 0.030343 / 0.534201 (-0.503858) | 0.539094 / 0.579283 (-0.040189) | 0.627258 / 0.434364 (0.192895) | 0.624638 / 0.540337 (0.084301) | 0.742776 / 1.386936 (-0.644160) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/events | https://github.com/huggingface/datasets/pull/1901 | 810,845,605 | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | 1,901 | Fix OPUS dataset download errors | [] | closed | false | null | 0 | 2021-02-18T07:39:41Z | 2021-02-18T15:07:20Z | 2021-02-18T09:39:21Z | null | Replace http to https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"merged_at": "2021-02-18T09:39:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1901"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1986/comments | https://api.github.com/repos/huggingface/datasets/issues/1986/events | https://github.com/huggingface/datasets/issues/1986 | 822,176,290 | MDU6SXNzdWU4MjIxNzYyOTA= | 1,986 | wmt datasets fail to load | [] | closed | false | null | 1 | 2021-03-04T14:18:55Z | 2021-03-04T14:31:07Z | 2021-03-04T14:31:07Z | null | ~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
758 # Extract manually downloaded files.
759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
761
762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1986/timeline | null | completed | null | null | false | [
"caching issue, seems to work again.."
] |
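A minimal, dataset-independent reproduction of the `TypeError` in the traceback above (illustrative only, not from the issue): `dict(mapping, **other)` requires `other` to be a mapping, and the wmt script received a list instead.

```python
# Minimal reproduction of the TypeError shown in the wmt traceback above:
# dict(a_dict, **something) requires `something` to be a mapping, not a list.
downloaded_files = {"train": "/tmp/train.sgm"}  # placeholder values
manual_files = []  # a list where a dict of manually downloaded files was expected

extraction_map = dict(downloaded_files, **manual_files)
# TypeError: type object argument after ** must be a mapping, not list
```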
https://api.github.com/repos/huggingface/datasets/issues/5325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5325/comments | https://api.github.com/repos/huggingface/datasets/issues/5325/events | https://github.com/huggingface/datasets/issues/5325 | 1,471,536,822 | I_kwDODunzps5Xtd62 | 5,325 | map(...batch_size=None) for IterableDataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 5 | 2022-12-01T15:43:42Z | 2022-12-07T15:54:43Z | 2022-12-07T15:54:42Z | null | ### Feature request
Dataset.map(...) allows batch_size to be None. It would be nice if IterableDataset did too.
### Motivation
Although it may seem a bit of a spurious request given that `IterableDataset` is meant for larger-than-memory datasets, there are a couple of reasons why this might be nice.
One is that load_dataset(...) can return either IterableDataset or Dataset. mypy will then complain if batch_size=None even if we know it is Dataset. Of course we can do:
assert isinstance(d, datasets.DatasetDict)
But it is a mild inconvenience. What's more annoying is that whenever we use something like e.g. `combine_datasets(...)`, we end up with the union again, and so have to do the assert again.
Another is that we could actually end up with an IterableDataset small enough for memory in normal/correct usage, e.g. by filtering a massive IterableDataset.
For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.
### Your contribution
Not this time. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5325/timeline | null | completed | null | null | false | [
"Hi! I agree it makes sense for `IterableDataset.map` to support the `batch_size=None` case. This should be super easy to fix.",
"@mariosasko as this is something simple maybe I can include it as part of https://github.com/huggingface/datasets/pull/5311? Let me know :+1:",
"#self-assign",
"Feel free to close this @lhoestq as part of https://github.com/huggingface/datasets/pull/5336 :hugs:",
"Thanks again :)\r\n\r\n> For practical usages, an alternative to this would be to convert from an iterable dataset to a map-style dataset, but it is not obvious how to do this.\r\n\r\nThis is interesting as well, if anyone wants to explore"
] |
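An illustrative sketch (not from the thread above) of one way to materialize an `IterableDataset` into a map-style `Dataset`, assuming a `datasets` version that provides `Dataset.from_generator`; the dataset name is only a placeholder:

```python
# Hedged sketch: materialize a small (e.g. heavily filtered) IterableDataset
# into a map-style Dataset via Dataset.from_generator.
from datasets import Dataset, load_dataset

iterable_ds = load_dataset("rotten_tomatoes", split="train", streaming=True).take(100)

# from_generator expects a callable that returns an iterable of examples
map_style_ds = Dataset.from_generator(lambda: (example for example in iterable_ds))
print(len(map_style_ds), map_style_ds[0])
```

Once the data is known to fit in memory, this keeps `map(..., batch_size=None)` and other map-style-only conveniences available.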
https://api.github.com/repos/huggingface/datasets/issues/3480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3480/comments | https://api.github.com/repos/huggingface/datasets/issues/3480/events | https://github.com/huggingface/datasets/issues/3480 | 1,088,267,110 | I_kwDODunzps5A3aNm | 3,480 | the compression format requested when saving a dataset in json format is not respected | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-12-24T09:23:51Z | 2022-01-05T13:03:35Z | 2022-01-05T13:03:35Z | null | ## Describe the bug
In the documentation of the `to_json` method, it is stated in the parameters that
> **to_json_kwargs** – Parameters to pass to pandas’s pandas.DataFrame.to_json.
However, when we pass for example `compression="gzip"`, the saved file is not compressed.
Would you also have expected compression to be applied? :relaxed:
## Steps to reproduce the bug
```python
my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]}
```
### Result with datasets
```python
from datasets import Dataset
dataset = Dataset.from_dict(my_dict)
dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip")
!cat dic_with_datasets.jsonl.gz
```
output
```
{"a":1,"b":1}
{"a":2,"b":2}
{"a":3,"b":3}
```
Note: I would expected to see binary data here
### Result with pandas
```python
import pandas as pd
df = pd.DataFrame(my_dict)
df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip")
!cat dic_with_pandas.jsonl.gz
```
output
```
4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)���
```
Note: It looks like binary data
## Expected results
I would have expected that the saved result with datasets would also be a binary file
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3480/timeline | null | completed | null | null | false | [
"Thanks for reporting @SaulLu.\r\n\r\nAt first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`.\r\n\r\nWe should fix this:\r\n- either handling directly the `compression` parameter ourselves\r\n- or refactoring to pass non-null path or buffer to pandas\r\n\r\nCC: @lhoestq",
"I was thinking if we can handle the `compression` parameter by ourselves? Compression types will be similar to what `pandas` offer. Initially, we can try this with 2-3 compression types and see how good/bad it is? Let me know if it sounds good, I can raise a PR for this next week",
"Hi ! Thanks for your help @bhavitvyamalik :)\r\nMaybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods"
] |
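An illustrative user-side workaround (not the fix adopted in the library): until `to_json` honors `compression` itself, as discussed in the comments above, one can write plain JSON Lines and gzip the file afterwards with the standard library:

```python
# Hedged sketch: produce a genuinely gzip-compressed JSON Lines file on the user side.
import gzip
import shutil

from datasets import Dataset

dataset = Dataset.from_dict({"a": [1, 2, 3], "b": [1, 2, 3]})
dataset.to_json("dic_with_datasets.jsonl")  # uncompressed JSON Lines

with open("dic_with_datasets.jsonl", "rb") as src, gzip.open("dic_with_datasets.jsonl.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)  # binary gzip output, like pandas' compression="gzip"
```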
https://api.github.com/repos/huggingface/datasets/issues/5782 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5782/comments | https://api.github.com/repos/huggingface/datasets/issues/5782/events | https://github.com/huggingface/datasets/issues/5782 | 1,679,622,367 | I_kwDODunzps5kHQDf | 5,782 | Support for various audio-loading backends instead of always relying on SoundFile | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 3 | 2023-04-22T17:09:25Z | 2023-05-10T20:23:04Z | 2023-05-10T20:23:04Z | null | ### Feature request
Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option.
### Motivation
- The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats).
- However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile.
- As a result, developers may potentially create a dataset they cannot read back.
In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files.
Example:
```python
audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio())
audio_dataset_amr.save_to_disk("audio_dataset_amr")
audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr")
print(audio_dataset_amr[0])
```
Results in:
```
Traceback (most recent call last):
...
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised.
```
While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner.
### Your contribution
I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later.
Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile.
Here you may see github actions fails to read `.amr` dataset using the version of the current dataset, but will work with the patched version:
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785
- https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829
As evident from the GitHub action above, this solution resolves the previously mentioned problem.
I'd be happy to create a proper pull request, provide runtime benchmarks and tests if you could offer some guidance on the following:
- Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class?
- Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile.
A few more notes:
- In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed. However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only SoundFile backend supports an open file descriptor as an input). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5782/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nYou can use `set_transform`/`with_transform` to define a custom decoding for audio formats not supported by `soundfile`:\r\n```python\r\naudio_dataset_amr = Dataset.from_dict({\"audio\": [\"audio_samples/audio.amr\"]})\r\n\r\ndef decode_audio(batch):\r\n batch[\"audio\"] = [read_ffmpeg(audio_path) for audio_path in batch[\"audio\"]]\r\n return batch\r\n\r\naudio_dataset_amr.set_transform(decode_amr) \r\n```\r\n\r\nSupporting multiple backends is more work to maintain, but we could consider this if we get more requests such as this one.",
"Could it be put somewhere as an example tip or something?",
"Considering the number of times a custom decoding transform has been suggested as a solution, an example in the [docs](https://huggingface.co/docs/datasets/process#format-transform) would be nice.\r\n\r\ncc @stevhliu "
] |
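A more concrete sketch of the custom-decoding transform suggested in the comments above (illustrative only; it assumes `librosa` with an ffmpeg-capable backend can read the format that `soundfile` cannot, and the file path is a placeholder):

```python
# Hedged sketch: decode soundfile-unsupported formats (e.g. .amr) with librosa via a transform.
import librosa
from datasets import Dataset

ds = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]})  # placeholder path

def decode_audio(batch):
    # librosa can fall back to audioread/ffmpeg for formats soundfile does not recognize
    decoded = [librosa.load(path, sr=None) for path in batch["audio"]]
    batch["array"] = [waveform for waveform, _ in decoded]
    batch["sampling_rate"] = [sr for _, sr in decoded]
    return batch

ds.set_transform(decode_audio)
print(ds[0])
```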
https://api.github.com/repos/huggingface/datasets/issues/2500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2500/comments | https://api.github.com/repos/huggingface/datasets/issues/2500/events | https://github.com/huggingface/datasets/pull/2500 | 920,471,411 | MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1 | 2,500 | Add load_dataset_builder | [] | closed | false | null | 6 | 2021-06-14T14:27:45Z | 2021-07-09T00:08:16Z | 2021-07-05T10:45:58Z | null | Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself.
TODOs:
- [x] Add docstring and entry in the docs
- [x] Add tests
Closes #2484
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2500/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2500/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2500.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2500",
"merged_at": "2021-07-05T10:45:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2500.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2500"
} | true | [
"Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors ❤️ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... 😉 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. 🤗 ",
"@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.",
"Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.\r\n",
"Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets\r\n/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```",
"Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.",
"Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)\"\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at <this path> or <this url>\" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error."
] |
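A minimal usage sketch of `load_dataset_builder` as added in the PR above (the dataset name is taken from the discussion; which info attributes are populated may vary by dataset):

```python
# Hedged sketch: inspect a dataset's metadata without downloading the dataset itself.
from datasets import load_dataset_builder

builder = load_dataset_builder("stas/openwebtext-10k")
print(builder.cache_dir)         # where the prepared dataset would be cached
print(builder.info.description)  # dataset-level metadata, available before download
print(builder.info.features)
```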
https://api.github.com/repos/huggingface/datasets/issues/83 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/83/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/83/comments | https://api.github.com/repos/huggingface/datasets/issues/83/events | https://github.com/huggingface/datasets/pull/83 | 616,863,601 | MDExOlB1bGxSZXF1ZXN0NDE2ODkyOTUz | 83 | New datasets | [] | closed | false | null | 0 | 2020-05-12T18:22:27Z | 2020-05-12T18:22:47Z | 2020-05-12T18:22:45Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/83/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/83/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/83.diff",
"html_url": "https://github.com/huggingface/datasets/pull/83",
"merged_at": "2020-05-12T18:22:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/83.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/83"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/81 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/81/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/81/comments | https://api.github.com/repos/huggingface/datasets/issues/81/events | https://github.com/huggingface/datasets/pull/81 | 616,793,010 | MDExOlB1bGxSZXF1ZXN0NDE2ODM1NzE1 | 81 | add tests | [] | closed | false | null | 0 | 2020-05-12T16:28:19Z | 2020-05-13T07:43:57Z | 2020-05-13T07:43:56Z | null | Tests for py_utils functions and for the BaseReader used to read from arrow and parquet.
I also removed unused utils functions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/81/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/81/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/81.diff",
"html_url": "https://github.com/huggingface/datasets/pull/81",
"merged_at": "2020-05-13T07:43:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/81.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/81"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2372/comments | https://api.github.com/repos/huggingface/datasets/issues/2372/events | https://github.com/huggingface/datasets/pull/2372 | 894,496,064 | MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2 | 2,372 | ConvQuestions benchmark added | [] | closed | false | null | 3 | 2021-05-18T15:16:50Z | 2021-05-26T10:31:45Z | 2021-05-26T10:31:45Z | null | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2372/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2372/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2372.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2372",
"merged_at": "2021-05-26T10:31:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2372.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2372"
} | true | [
"Thanks for your helpful comments and suggestions! :)\r\nI integrated the additional fields, and extended some of the README/dataset card.\r\nAnd I actually realized that we had the cc-by-4.0 for the dataset, so this was also changed.",
"I added the answers to the test set actually :)",
"Oh great ! Let me revert my change then"
] |
https://api.github.com/repos/huggingface/datasets/issues/424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/424/comments | https://api.github.com/repos/huggingface/datasets/issues/424/events | https://github.com/huggingface/datasets/pull/424 | 663,858,552 | MDExOlB1bGxSZXF1ZXN0NDU1MTk4MTY0 | 424 | Web of science | [] | closed | false | null | 0 | 2020-07-22T15:38:31Z | 2020-07-23T14:27:58Z | 2020-07-23T14:27:56Z | null | this PR adds the WebofScience dataset
#353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/424/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/424",
"merged_at": "2020-07-23T14:27:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/424"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5580/comments | https://api.github.com/repos/huggingface/datasets/issues/5580/events | https://github.com/huggingface/datasets/pull/5580 | 1,600,431,792 | PR_kwDODunzps5Kys1c | 5,580 | Support cloud storage in load_dataset via fsspec | [] | closed | false | null | 6 | 2023-02-27T04:06:05Z | 2023-03-11T01:02:49Z | 2023-03-11T00:55:40Z | null | Closes https://github.com/huggingface/datasets/issues/5281
This PR uses fsspec to support datasets on cloud storage (tested manually with GCS). ETags are currently unsupported for cloud storage. In general, a much larger refactor could be done to just use fsspec for all schemes (ftp, http/s, s3, gcs) to unify the interfaces here, but I ultimately opted to leave that out of this PR.
I didn't create a GCS filesystem class in `datasets.filesystems` since the S3 one appears to be a wrapper around `s3fs.S3FileSystem` and mainly used to generate docs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5580/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5580.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5580",
"merged_at": "2023-03-11T00:55:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5580.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5580"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Regarding the tests I think it should be possible to use the mockfs fixture, it allows to play with a dummy fsspec FileSystem with the \"mock://\" protocol.\r\n\r\n> However it requires some storage_options to be passed. Maybe it can be added to DownloadConfig which is passed to cached_path, so that fsspec_get and fsspec_head can use the user's storage_options ?\r\n\r\n@lhoestq I went ahead and tested this with a patch so that I could assign the mockfs as a return value. Let me know if I'm missing something though and we need to pass storage_options down",
"> Instead of patching think it would be better to have a new filesystem TmpDirFileSystem (tmpfs) that doesn't need storage_options for the tests, and that is based on a temporary directory created just for the fixture. Maybe something like this ?\r\n\r\nThanks for the recommendation, this works great.",
"Feel free to merge `main` into your PR to fix the CI :)",
"Should be good to go. Thanks!",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006183 / 0.011353 (-0.005170) | 0.004180 / 0.011008 (-0.006829) | 0.095965 / 0.038508 (0.057457) | 0.026754 / 0.023109 (0.003645) | 0.339724 / 0.275898 (0.063826) | 0.381628 / 0.323480 (0.058149) | 0.004615 / 0.007986 (-0.003371) | 0.004469 / 0.004328 (0.000140) | 0.074035 / 0.004250 (0.069784) | 0.035089 / 0.037052 (-0.001963) | 0.352253 / 0.258489 (0.093764) | 0.389598 / 0.293841 (0.095757) | 0.032262 / 0.128546 (-0.096285) | 0.011392 / 0.075646 (-0.064254) | 0.323884 / 0.419271 (-0.095388) | 0.042658 / 0.043533 (-0.000874) | 0.331533 / 0.255139 (0.076394) | 0.364723 / 0.283200 (0.081523) | 0.086349 / 0.141683 (-0.055334) | 1.465687 / 1.452155 (0.013533) | 1.559782 / 1.492716 (0.067066) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198562 / 0.018006 (0.180556) | 0.457170 / 0.000490 (0.456680) | 0.000409 / 0.000200 (0.000209) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022439 / 0.037411 (-0.014973) | 0.096551 / 0.014526 (0.082025) | 0.102230 / 0.176557 (-0.074326) | 0.160878 / 0.737135 (-0.576257) | 0.109348 / 0.296338 (-0.186990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456635 / 0.215209 (0.241426) | 4.563571 / 2.077655 (2.485916) | 2.313048 / 1.504120 (0.808928) | 2.117433 / 1.541195 (0.576239) | 2.127478 / 1.468490 
(0.658988) | 0.699478 / 4.584777 (-3.885299) | 3.358955 / 3.745712 (-0.386757) | 1.821437 / 5.269862 (-3.448424) | 1.158239 / 4.565676 (-3.407438) | 0.083207 / 0.424275 (-0.341068) | 0.012925 / 0.007607 (0.005318) | 0.556526 / 0.226044 (0.330482) | 5.552364 / 2.268929 (3.283435) | 2.744696 / 55.444624 (-52.699928) | 2.374455 / 6.876477 (-4.502022) | 2.442021 / 2.142072 (0.299949) | 0.809393 / 4.805227 (-3.995834) | 0.152305 / 6.500664 (-6.348359) | 0.066164 / 0.075469 (-0.009305) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.258268 / 1.841788 (-0.583520) | 13.402391 / 8.074308 (5.328083) | 13.816927 / 10.191392 (3.625535) | 0.148466 / 0.680424 (-0.531958) | 0.016487 / 0.534201 (-0.517714) | 0.385888 / 0.579283 (-0.193395) | 0.378840 / 0.434364 (-0.055524) | 0.444527 / 0.540337 (-0.095810) | 0.531011 / 1.386936 (-0.855925) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006230 / 0.011353 (-0.005123) | 0.004488 / 0.011008 (-0.006520) | 0.077539 / 0.038508 (0.039031) | 0.026611 / 0.023109 (0.003502) | 0.342093 / 0.275898 (0.066195) | 0.371555 / 0.323480 (0.048075) | 0.004665 / 0.007986 (-0.003321) | 0.003289 / 0.004328 (-0.001039) | 0.078378 / 0.004250 (0.074128) | 0.035223 / 0.037052 (-0.001829) | 0.339972 / 0.258489 (0.081483) | 0.378755 / 0.293841 (0.084914) | 0.031331 / 0.128546 (-0.097215) | 0.011406 / 0.075646 (-0.064241) | 0.086891 / 0.419271 (-0.332381) | 0.047713 / 0.043533 (0.004180) | 0.342678 / 0.255139 (0.087539) | 0.364536 / 0.283200 (0.081337) | 0.092132 / 0.141683 (-0.049551) | 1.537050 / 1.452155 (0.084895) | 1.639927 / 1.492716 (0.147211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.219933 / 0.018006 (0.201927) | 0.391627 / 0.000490 (0.391137) | 0.002238 / 0.000200 (0.002038) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024890 / 0.037411 (-0.012521) | 0.098989 / 0.014526 (0.084464) | 0.104505 / 0.176557 (-0.072052) | 0.156252 / 0.737135 (-0.580884) | 0.108027 / 0.296338 (-0.188312) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443957 / 0.215209 (0.228748) | 4.450850 / 2.077655 (2.373196) | 2.076043 / 1.504120 (0.571923) | 1.866396 / 1.541195 (0.325202) | 1.902692 / 1.468490 (0.434202) | 0.703160 / 4.584777 (-3.881617) | 3.373761 / 3.745712 (-0.371951) | 2.615649 / 5.269862 (-2.654213) | 1.340612 / 4.565676 (-3.225065) | 0.083836 / 0.424275 (-0.340439) | 0.012619 / 0.007607 (0.005012) | 0.553410 / 0.226044 (0.327365) | 5.526500 / 2.268929 (3.257571) | 2.513213 / 55.444624 (-52.931411) | 2.152701 / 6.876477 (-4.723776) | 2.165092 / 2.142072 (0.023019) | 0.818381 / 4.805227 (-3.986846) | 0.152118 / 6.500664 (-6.348546) | 0.066950 / 0.075469 (-0.008519) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291468 / 1.841788 (-0.550320) | 13.694828 / 8.074308 (5.620520) | 13.821019 / 10.191392 (3.629627) | 0.126077 / 0.680424 (-0.554347) | 0.016543 / 0.534201 (-0.517658) | 0.381399 / 0.579283 (-0.197884) | 0.377326 / 0.434364 (-0.057038) | 0.439275 / 0.540337 (-0.101063) | 0.524021 / 1.386936 (-0.862915) |\n\n</details>\n</details>\n\n\n"
] |
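A hedged sketch (not from the PR) of what loading data files from cloud storage via fsspec can look like after this change; the bucket path and `storage_options` values are placeholders, and the keyword arguments assume a `datasets` release that includes this PR:

```python
# Hedged sketch: read data files from cloud storage through fsspec (placeholders throughout).
from datasets import load_dataset

dataset = load_dataset(
    "parquet",
    data_files="gs://my-bucket/path/to/data.parquet",  # placeholder GCS path
    storage_options={"token": "anon"},                 # placeholder gcsfs/fsspec options
)
print(dataset)
```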
https://api.github.com/repos/huggingface/datasets/issues/1387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1387/comments | https://api.github.com/repos/huggingface/datasets/issues/1387/events | https://github.com/huggingface/datasets/pull/1387 | 760,368,355 | MDExOlB1bGxSZXF1ZXN0NTM1MjExODQ1 | 1,387 | Add LIAR dataset | [] | closed | false | null | 2 | 2020-12-09T14:16:55Z | 2020-12-14T18:06:43Z | 2020-12-14T16:23:59Z | null | Add LIAR dataset from [“Liar, Liar Pants on Fire”: A New Benchmark Dataset for Fake News Detection](https://www.aclweb.org/anthology/P17-2067/). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1387/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1387.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1387",
"merged_at": "2020-12-14T16:23:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1387.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1387"
} | true | [
"@lhoestq done! The failing testes don't seem to be related, it seems to be a connection issue, if I understand it correctly.",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/1092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1092/comments | https://api.github.com/repos/huggingface/datasets/issues/1092/events | https://github.com/huggingface/datasets/pull/1092 | 756,913,134 | MDExOlB1bGxSZXF1ZXN0NTMyMzc5MDY0 | 1,092 | Add Coached Conversation Preference Dataset | [] | closed | false | null | 0 | 2020-12-04T08:36:49Z | 2020-12-20T13:34:00Z | 2020-12-04T13:49:50Z | null | Adding [Coached Conversation Preference Dataset](https://research.google/tools/datasets/coached-conversational-preference-elicitation/)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1092/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1092.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1092",
"merged_at": "2020-12-04T13:49:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1092.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1092"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2865/comments | https://api.github.com/repos/huggingface/datasets/issues/2865/events | https://github.com/huggingface/datasets/pull/2865 | 986,460,698 | MDExOlB1bGxSZXF1ZXN0NzI1NjY1ODgx | 2,865 | Add MultiEURLEX dataset | [] | closed | false | null | 6 | 2021-09-02T09:42:24Z | 2021-09-10T11:50:06Z | 2021-09-10T11:50:06Z | null | **Add new MultiEURLEX Dataset**
MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2865/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2865/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2865",
"merged_at": "2021-09-10T11:50:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2865"
} | true | [
"Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ",
"Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was updated with the publication URL and instructions on how to download and use label descriptors. Excessive newlines were deleted.\r\n\r\nI would prefer to keep the label list in a pure format (original ids), to enable people to combine those with more information or possibly in the future explore the dataset, find inconsistencies and fix those to release a new version. ",
"Thanks for the changes :)\r\n\r\nRegarding the labels:\r\n\r\nIf you use the ClassLabel feature type, the only change is that it will store the ids as integers instead of (currently) string.\r\nThe advantage is that if people want to know what id corresponds to which label name, they can use `classlabel.int2str`. It is also the format that helps automate model training for classification in `transformers`.\r\n\r\nLet me know if that sounds good to you or if you still want to stick with the labels as they are now.",
"Hey @lhoestq, thanks for providing this information. This sounds great. I updated my code accordingly to use `ClassLabel`. Could you please provide a minimal example of how `classlabel.int2str` works in practice in my case, where labels are a sequence?\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages')\r\n# Read strs from the labels (list of integers) for the 1st sample of the training split\r\n```\r\n\r\nI would like to include this in the README file.\r\n\r\nCould you also provide some info on how I could define the supervized key to automate model training, as you said?\r\n\r\nThanks!",
"Thanks for the update :)\r\n\r\nHere is an example of usage:\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('multi_eurlex', 'all_languages', split='train')\r\nclasslabel = dataset.features[\"labels\"].feature\r\nprint(dataset[0][\"labels\"])\r\n# [1, 20, 7, 3, 0]\r\nprint(classlabel.int2str(dataset[0][\"labels\"]))\r\n# ['100160', '100155', '100158', '100147', '100149']\r\n```\r\n\r\nThe ClassLabel is simply used to define the `id2label` dictionary of classification models, to make the ids match between the model and the dataset. There nothing more to do :p \r\n\r\nI think one last thing to do is just update the `dataset_infos.json` file and we'll be good !",
"Everything is ready! 👍 \r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4841/comments | https://api.github.com/repos/huggingface/datasets/issues/4841/events | https://github.com/huggingface/datasets/pull/4841 | 1,337,401,243 | PR_kwDODunzps49Gf0I | 4,841 | Update ted_talks_iwslt license to include ND | [] | closed | false | null | 1 | 2022-08-12T16:14:52Z | 2022-08-14T11:15:22Z | 2022-08-14T11:00:22Z | null | Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4841/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4841.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4841",
"merged_at": "2022-08-14T11:00:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4841.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4841"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/615/comments | https://api.github.com/repos/huggingface/datasets/issues/615/events | https://github.com/huggingface/datasets/issues/615 | 699,410,773 | MDU6SXNzdWU2OTk0MTA3NzM= | 615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | [] | closed | false | null | 8 | 2020-09-11T14:50:38Z | 2023-07-03T15:19:33Z | 2020-09-19T16:46:31Z | null | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-381aedc9811b> in <module>
----> 1 wikipedia[[0]]
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
1069 format_columns=self._format_columns,
1070 output_all_columns=self._output_all_columns,
-> 1071 format_kwargs=self._format_kwargs,
1072 )
1073
~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
1037 )
1038 else:
-> 1039 data_subset = self._data.take(indices_array)
1040
1041 if format_type is not None:
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
266 """
267 options = TakeOptions(boundscheck)
--> 268 return call_function('take', [data, indices], options)
269
270
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: offset overflow while concatenating arrays
```
It seems to work fine with small datasets or with pyarrow 0.17.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/615/timeline | null | completed | null | null | false | [
"Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_indices` is not None, this breaks indexing by slice. E.g. `dset.shuffle()[:1]` breaks.\r\n\r\nLuckily so far I haven't seen `_indices.column(0).take` break, which means it doesn't break `select` or anything like that which is where the speed really matters, it's just `_getitem`. So I'm currently working around it by just doing the arrow v0 method in `_getitem`:\r\n```\r\n#if PYARROW_V0:\r\ndata_subset = pa.concat_tables(\r\n self._data.slice(indices_array[i].as_py(), 1) for i in range(len(indices_array))\r\n)\r\n#else:\r\n #data_subset = self._data.take(indices_array)\r\n```",
"Let me know if you meet other offset overflow issues @joeddav ",
"Will this problem be solved in newer version?",
"This specific issue has been fixed in https://github.com/huggingface/datasets/pull/645\r\n\r\nIf you still have this error, could you open a new issue and explain how to reproduce the error ?",
"same error here in version 2.1.0",
"Facing the same issue. \r\nSteps to reproduce: (dataset is a few GB big so try in colab maybe)\r\nDatasets version - 2.11.0\r\n```\r\nimport datasets\r\nimport re\r\n\r\nds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train')\r\n\r\ndef get_text_caption(example):\r\n regex_pattern = r'\\s\\d+x\\d+|,\\sLQ|,\\sgrid|\\.\\w+$'\r\n example['text_caption'] = re.sub(regex_pattern, '', example['picture_text'])\r\n return example\r\n\r\nds = ds.map(get_text_caption)\r\n```\r\n\r\nI am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up.",
"Got this error on a very large data set (900m rows, 35 cols) performing a similar batch map operation.",
"There is a solution that has been proposed here: https://github.com/huggingface/datasets/issues/5783"
] |
https://api.github.com/repos/huggingface/datasets/issues/1835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1835/comments | https://api.github.com/repos/huggingface/datasets/issues/1835/events | https://github.com/huggingface/datasets/issues/1835 | 803,524,790 | MDU6SXNzdWU4MDM1MjQ3OTA= | 1,835 | Add CHiME4 dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | open | false | null | 0 | 2021-02-08T12:36:38Z | 2021-02-08T13:13:31Z | null | null | ## Adding a Dataset
- **Name:** Chime4
- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR
- **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results paper:
- **Data:** http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html
- **Motivation:** So far there are very few speech datasets in `datasets`; only `librispeech_asr` for now.
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1835/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2797 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2797/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2797/comments | https://api.github.com/repos/huggingface/datasets/issues/2797/events | https://github.com/huggingface/datasets/issues/2797 | 970,331,634 | MDU6SXNzdWU5NzAzMzE2MzQ= | 2,797 | Make creating/editing dataset cards easier, by editing on site and dumping info from test command. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2021-08-13T11:54:49Z | 2021-08-14T08:42:09Z | null | null | **Is your feature request related to a problem? Please describe.**
Creating and editing dataset cards should be easy, but currently it is not:
- If someone else knows some information I don't (bias of the dataset, dataset curation, supported tasks, ...), they first need to know that the description on hf.co comes from the README.md under github huggingface/datasets/datasets/the dataset, and then be willing to make a PR to add or fix that information.
- A lot of information is also saved in `dataset_infos.json` (citation, description), but it still needs to be written down in README.md again.
- Contributors need to pip install and start a local server just to tag the dataset's size. And a contributor may be creating the dataset on a lab server, which can't open a browser.
- If anyone proposes a new tag, it doesn't show up in the list that other creators see (a Stack Overflow-like way may be ideal).
- The dataset card generator web app doesn't generate the necessary subsection `Contributions` for us.
**Describe the solution you'd like**
- Everyone (or at least the author/contributor) can edit the description, information, and tags of the dataset on the hf.co website, just like Wikipedia + Stack Overflow.
- We can infer the actual data size, citation, data instances, ... from `dataset_infos.json` and `dataset.arrow` via `datasets-cli test` (see the sketch below).
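For illustration, here is a rough sketch of what I mean by reusing that file when drafting a README (the file layout, key names, and the CLI invocation in the comment are my assumptions, so treat them as illustrative rather than exact):
```python
# Hypothetical helper: pull README-worthy fields out of an existing
# dataset_infos.json (e.g. one regenerated by something like
# `datasets-cli test datasets/my_dataset --save_infos --all_configs`)
# instead of retyping them by hand.
import json

def summarize_dataset_infos(path="datasets/my_dataset/dataset_infos.json"):
    with open(path, encoding="utf-8") as f:
        infos = json.load(f)
    for config_name, info in infos.items():
        n_examples = sum(split["num_examples"] for split in info["splits"].values())
        print(f"## {config_name}")
        print(info["description"].strip())
        print(f"Total examples: {n_examples}")
        print(info["citation"].strip())
```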
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2797/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2797/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/553/comments | https://api.github.com/repos/huggingface/datasets/issues/553/events | https://github.com/huggingface/datasets/pull/553 | 690,143,182 | MDExOlB1bGxSZXF1ZXN0NDc3MDgxNTg2 | 553 | [Fix GitHub Actions] test adding tmate | [] | closed | false | null | 0 | 2020-09-01T13:28:03Z | 2021-05-05T18:24:38Z | 2020-09-03T09:01:13Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/553/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/553",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/553"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5897/comments | https://api.github.com/repos/huggingface/datasets/issues/5897/events | https://github.com/huggingface/datasets/pull/5897 | 1,726,135,494 | PR_kwDODunzps5RXJaY | 5,897 | Fix `FixedSizeListArray` casting | [] | closed | false | null | 4 | 2023-05-25T16:26:33Z | 2023-05-26T12:22:04Z | 2023-05-26T11:57:16Z | null | Fix cast on sliced `FixedSizeListArray`s.
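For context, here is a minimal standalone sketch of the gotcha being fixed (the array below is purely illustrative and not taken from the linked issue): on a sliced `FixedSizeListArray`, `.values` spans the whole underlying storage, so any cast that flattens through `.values` has to account for the slice offset and length itself.
```python
import pyarrow as pa

# Illustrative data only (an assumption for this sketch, not from the issue/PR).
arr = pa.array([[1, 2], [3, 4], [5, 6]], type=pa.list_(pa.int64(), 2))
sliced = arr.slice(1)  # logically [[3, 4], [5, 6]]

print(sliced.to_pylist())  # the slice itself looks fine
# `.values` ignores the slice offset and returns the full child storage,
# so flattening through it naively would also pick up the dropped [1, 2].
print(sliced.values.to_pylist())
```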
Fix #5866 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5897/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5897",
"merged_at": "2023-05-26T11:57:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5897"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006213 / 0.011353 (-0.005140) | 0.004230 / 0.011008 (-0.006778) | 0.098014 / 0.038508 (0.059506) | 0.028659 / 0.023109 (0.005550) | 0.303272 / 0.275898 (0.027374) | 0.337186 / 0.323480 (0.013706) | 0.005126 / 0.007986 (-0.002860) | 0.003563 / 0.004328 (-0.000765) | 0.075295 / 0.004250 (0.071045) | 0.036836 / 0.037052 (-0.000216) | 0.309612 / 0.258489 (0.051123) | 0.346484 / 0.293841 (0.052643) | 0.025714 / 0.128546 (-0.102832) | 0.008562 / 0.075646 (-0.067085) | 0.323475 / 0.419271 (-0.095796) | 0.044072 / 0.043533 (0.000539) | 0.308261 / 0.255139 (0.053122) | 0.330903 / 0.283200 (0.047703) | 0.091805 / 0.141683 (-0.049878) | 1.517011 / 1.452155 (0.064856) | 1.570815 / 1.492716 (0.078099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211265 / 0.018006 (0.193259) | 0.438860 / 0.000490 (0.438370) | 0.001127 / 0.000200 (0.000927) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023337 / 0.037411 (-0.014074) | 0.096243 / 0.014526 (0.081717) | 0.103529 / 0.176557 (-0.073028) | 0.161171 / 0.737135 (-0.575964) | 0.105904 / 0.296338 (-0.190435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417042 / 0.215209 (0.201833) | 4.155067 / 2.077655 (2.077412) | 1.879657 / 1.504120 (0.375537) | 1.669341 / 1.541195 (0.128146) | 1.717623 / 1.468490 
(0.249133) | 0.556246 / 4.584777 (-4.028531) | 3.484535 / 3.745712 (-0.261177) | 1.728845 / 5.269862 (-3.541017) | 0.997477 / 4.565676 (-3.568199) | 0.068355 / 0.424275 (-0.355920) | 0.012445 / 0.007607 (0.004837) | 0.519023 / 0.226044 (0.292978) | 5.173506 / 2.268929 (2.904577) | 2.332435 / 55.444624 (-53.112190) | 1.986348 / 6.876477 (-4.890129) | 2.076885 / 2.142072 (-0.065187) | 0.656738 / 4.805227 (-4.148489) | 0.135308 / 6.500664 (-6.365356) | 0.065486 / 0.075469 (-0.009984) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208874 / 1.841788 (-0.632914) | 13.994200 / 8.074308 (5.919892) | 14.160978 / 10.191392 (3.969586) | 0.146009 / 0.680424 (-0.534415) | 0.016573 / 0.534201 (-0.517628) | 0.356082 / 0.579283 (-0.223202) | 0.387766 / 0.434364 (-0.046598) | 0.419130 / 0.540337 (-0.121208) | 0.508634 / 1.386936 (-0.878302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006238 / 0.011353 (-0.005115) | 0.004221 / 0.011008 (-0.006788) | 0.075155 / 0.038508 (0.036646) | 0.028491 / 0.023109 (0.005382) | 0.355606 / 0.275898 (0.079708) | 0.388986 / 0.323480 (0.065506) | 0.005941 / 0.007986 (-0.002044) | 0.003510 / 0.004328 (-0.000819) | 0.074905 / 0.004250 (0.070655) | 0.039111 / 0.037052 (0.002059) | 0.358492 / 0.258489 (0.100003) | 0.398763 / 0.293841 (0.104922) | 0.025535 / 0.128546 (-0.103012) | 0.008580 / 0.075646 (-0.067067) | 0.080461 / 0.419271 (-0.338811) | 0.041381 / 0.043533 (-0.002152) | 0.355498 / 0.255139 (0.100359) | 0.379163 / 0.283200 (0.095963) | 0.096450 / 0.141683 (-0.045233) | 1.503248 / 1.452155 (0.051093) | 1.595616 / 1.492716 (0.102900) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238065 / 0.018006 (0.220058) | 0.422800 / 0.000490 (0.422311) | 0.002274 / 0.000200 (0.002074) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025746 / 0.037411 (-0.011665) | 0.103319 / 0.014526 (0.088793) | 0.112155 / 0.176557 (-0.064401) | 0.163034 / 0.737135 (-0.574101) | 0.113377 / 0.296338 (-0.182962) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440522 / 0.215209 (0.225313) | 4.398123 / 2.077655 (2.320468) | 2.143538 / 1.504120 (0.639418) | 1.946084 / 1.541195 (0.404890) | 1.996556 / 1.468490 (0.528066) | 0.550108 / 4.584777 (-4.034669) | 3.455774 / 3.745712 (-0.289938) | 2.862474 / 5.269862 (-2.407387) | 1.213446 / 4.565676 (-3.352230) | 0.067987 / 0.424275 (-0.356288) | 0.012413 / 0.007607 (0.004806) | 0.543990 / 0.226044 (0.317945) | 5.454807 / 2.268929 (3.185879) | 2.669195 / 55.444624 (-52.775429) | 2.332948 / 6.876477 (-4.543528) | 2.383870 / 2.142072 (0.241797) | 0.652017 / 4.805227 (-4.153210) | 0.135508 / 6.500664 (-6.365156) | 0.068238 / 0.075469 (-0.007231) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322669 / 1.841788 (-0.519118) | 14.368136 / 8.074308 (6.293828) | 14.167431 / 10.191392 (3.976039) | 0.159371 / 0.680424 (-0.521052) | 0.016638 / 0.534201 (-0.517563) | 0.357106 / 0.579283 (-0.222177) | 0.392491 / 0.434364 (-0.041873) | 0.419458 / 0.540337 (-0.120880) | 0.504662 / 1.386936 (-0.882274) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006296 / 0.011353 (-0.005057) | 0.004185 / 0.011008 (-0.006823) | 0.096170 / 0.038508 (0.057662) | 0.029212 / 0.023109 (0.006102) | 0.315356 / 0.275898 (0.039458) | 0.335214 / 0.323480 (0.011734) | 0.005108 / 0.007986 (-0.002877) | 0.003634 / 0.004328 (-0.000694) | 0.074186 / 0.004250 (0.069936) | 0.038716 / 0.037052 (0.001663) | 0.311041 / 0.258489 (0.052551) | 0.341202 / 0.293841 (0.047361) | 0.025584 / 0.128546 (-0.102962) | 0.008499 / 0.075646 (-0.067148) | 0.318660 / 0.419271 (-0.100611) | 0.043745 / 0.043533 (0.000212) | 0.314824 / 0.255139 (0.059685) | 0.328117 / 0.283200 (0.044917) | 0.093425 / 0.141683 (-0.048258) | 1.478732 / 1.452155 (0.026578) | 1.531743 / 1.492716 (0.039027) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203484 / 0.018006 (0.185478) | 0.416131 / 0.000490 (0.415641) | 0.007352 / 0.000200 (0.007152) | 0.000211 / 0.000054 (0.000156) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022908 / 0.037411 (-0.014503) | 0.098641 / 0.014526 (0.084115) | 0.103426 / 0.176557 (-0.073131) | 0.161658 / 0.737135 (-0.575477) | 0.106506 / 0.296338 (-0.189832) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430781 / 0.215209 (0.215572) | 4.315677 / 2.077655 (2.238022) | 2.022302 / 1.504120 (0.518182) | 1.832043 / 1.541195 (0.290849) | 1.789302 / 1.468490 
(0.320812) | 0.560484 / 4.584777 (-4.024293) | 3.448204 / 3.745712 (-0.297508) | 1.725016 / 5.269862 (-3.544846) | 1.002649 / 4.565676 (-3.563027) | 0.068480 / 0.424275 (-0.355795) | 0.012617 / 0.007607 (0.005010) | 0.532291 / 0.226044 (0.306246) | 5.319352 / 2.268929 (3.050423) | 2.520730 / 55.444624 (-52.923894) | 2.213881 / 6.876477 (-4.662596) | 2.352477 / 2.142072 (0.210404) | 0.662516 / 4.805227 (-4.142711) | 0.136481 / 6.500664 (-6.364183) | 0.066597 / 0.075469 (-0.008872) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224537 / 1.841788 (-0.617251) | 13.849920 / 8.074308 (5.775612) | 14.026358 / 10.191392 (3.834966) | 0.131018 / 0.680424 (-0.549405) | 0.016756 / 0.534201 (-0.517445) | 0.358091 / 0.579283 (-0.221192) | 0.397709 / 0.434364 (-0.036655) | 0.450024 / 0.540337 (-0.090314) | 0.542609 / 1.386936 (-0.844327) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006179 / 0.011353 (-0.005174) | 0.004145 / 0.011008 (-0.006863) | 0.077482 / 0.038508 (0.038974) | 0.028005 / 0.023109 (0.004896) | 0.400010 / 0.275898 (0.124112) | 0.408206 / 0.323480 (0.084726) | 0.005049 / 0.007986 (-0.002937) | 0.003608 / 0.004328 (-0.000721) | 0.076841 / 0.004250 (0.072590) | 0.036714 / 0.037052 (-0.000338) | 0.406020 / 0.258489 (0.147531) | 0.412392 / 0.293841 (0.118551) | 0.025626 / 0.128546 (-0.102920) | 0.008560 / 0.075646 (-0.067087) | 0.084088 / 0.419271 (-0.335183) | 0.039707 / 0.043533 (-0.003826) | 0.396909 / 0.255139 (0.141770) | 0.403623 / 0.283200 (0.120424) | 0.095137 / 0.141683 (-0.046546) | 1.515670 / 1.452155 (0.063515) | 1.568379 / 1.492716 (0.075662) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.181802 / 0.018006 (0.163795) | 0.408778 / 0.000490 (0.408289) | 0.000393 / 0.000200 (0.000193) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025940 / 0.037411 (-0.011471) | 0.099992 / 0.014526 (0.085466) | 0.106280 / 0.176557 (-0.070276) | 0.161729 / 0.737135 (-0.575406) | 0.108625 / 0.296338 (-0.187713) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.459802 / 0.215209 (0.244593) | 4.603002 / 2.077655 (2.525347) | 2.406851 / 1.504120 (0.902732) | 2.265422 / 1.541195 (0.724227) | 2.306305 / 1.468490 (0.837815) | 0.553903 / 4.584777 (-4.030874) | 3.482052 / 3.745712 (-0.263660) | 2.969855 / 5.269862 (-2.300007) | 1.309285 / 4.565676 (-3.256391) | 0.068130 / 0.424275 (-0.356145) | 0.012189 / 0.007607 (0.004582) | 0.571299 / 0.226044 (0.345254) | 5.711420 / 2.268929 (3.442492) | 2.716748 / 55.444624 (-52.727876) | 2.369869 / 6.876477 (-4.506608) | 2.544240 / 2.142072 (0.402167) | 0.659955 / 4.805227 (-4.145272) | 0.136684 / 6.500664 (-6.363980) | 0.068962 / 0.075469 (-0.006507) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.297659 / 1.841788 (-0.544129) | 14.012758 / 8.074308 (5.938449) | 14.324644 / 10.191392 (4.133252) | 0.144894 / 0.680424 (-0.535530) | 0.016751 / 0.534201 (-0.517450) | 0.361547 / 0.579283 (-0.217736) | 0.396595 / 0.434364 (-0.037769) | 0.422375 / 0.540337 (-0.117962) | 0.508209 / 1.386936 (-0.878727) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006303 / 0.011353 (-0.005050) | 0.004043 / 0.011008 (-0.006965) | 0.096239 / 0.038508 (0.057731) | 0.029608 / 0.023109 (0.006498) | 0.321058 / 0.275898 (0.045160) | 0.367066 / 0.323480 (0.043587) | 0.005236 / 0.007986 (-0.002749) | 0.003342 / 0.004328 (-0.000987) | 0.074407 / 0.004250 (0.070157) | 0.038810 / 0.037052 (0.001757) | 0.332597 / 0.258489 (0.074108) | 0.363562 / 0.293841 (0.069721) | 0.025460 / 0.128546 (-0.103086) | 0.008426 / 0.075646 (-0.067221) | 0.316998 / 0.419271 (-0.102273) | 0.043621 / 0.043533 (0.000088) | 0.338043 / 0.255139 (0.082904) | 0.366441 / 0.283200 (0.083241) | 0.092061 / 0.141683 (-0.049622) | 1.461531 / 1.452155 (0.009376) | 1.538047 / 1.492716 (0.045331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206796 / 0.018006 (0.188790) | 0.517959 / 0.000490 (0.517469) | 0.002745 / 0.000200 (0.002545) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022902 / 0.037411 (-0.014510) | 0.097901 / 0.014526 (0.083375) | 0.103664 / 0.176557 (-0.072893) | 0.163516 / 0.737135 (-0.573619) | 0.108561 / 0.296338 (-0.187778) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.418964 / 0.215209 (0.203755) | 4.159113 / 2.077655 (2.081458) | 1.843946 / 1.504120 (0.339827) | 1.641083 / 1.541195 (0.099888) | 1.686848 / 1.468490 
(0.218358) | 0.554583 / 4.584777 (-4.030194) | 3.409862 / 3.745712 (-0.335850) | 2.647904 / 5.269862 (-2.621958) | 1.355424 / 4.565676 (-3.210253) | 0.068229 / 0.424275 (-0.356046) | 0.012217 / 0.007607 (0.004610) | 0.515895 / 0.226044 (0.289851) | 5.144920 / 2.268929 (2.875991) | 2.298046 / 55.444624 (-53.146579) | 1.964735 / 6.876477 (-4.911741) | 2.075580 / 2.142072 (-0.066492) | 0.657104 / 4.805227 (-4.148123) | 0.134759 / 6.500664 (-6.365905) | 0.067545 / 0.075469 (-0.007924) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.233075 / 1.841788 (-0.608713) | 13.896762 / 8.074308 (5.822454) | 14.055143 / 10.191392 (3.863751) | 0.145507 / 0.680424 (-0.534917) | 0.016702 / 0.534201 (-0.517499) | 0.365157 / 0.579283 (-0.214126) | 0.385842 / 0.434364 (-0.048522) | 0.459993 / 0.540337 (-0.080344) | 0.547115 / 1.386936 (-0.839821) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.004191 / 0.011008 (-0.006817) | 0.078311 / 0.038508 (0.039803) | 0.028038 / 0.023109 (0.004928) | 0.360056 / 0.275898 (0.084158) | 0.398081 / 0.323480 (0.074602) | 0.005069 / 0.007986 (-0.002916) | 0.003464 / 0.004328 (-0.000864) | 0.077858 / 0.004250 (0.073608) | 0.039420 / 0.037052 (0.002367) | 0.361743 / 0.258489 (0.103254) | 0.404829 / 0.293841 (0.110988) | 0.025604 / 0.128546 (-0.102943) | 0.008573 / 0.075646 (-0.067074) | 0.084944 / 0.419271 (-0.334328) | 0.042652 / 0.043533 (-0.000881) | 0.368549 / 0.255139 (0.113410) | 0.385682 / 0.283200 (0.102482) | 0.099085 / 0.141683 (-0.042598) | 1.495815 / 1.452155 (0.043661) | 1.548168 / 1.492716 (0.055452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.193737 / 0.018006 (0.175730) | 0.421871 / 0.000490 (0.421381) | 0.002306 / 0.000200 (0.002106) | 0.000073 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025928 / 0.037411 (-0.011483) | 0.103410 / 0.014526 (0.088885) | 0.107931 / 0.176557 (-0.068626) | 0.157127 / 0.737135 (-0.580008) | 0.111892 / 0.296338 (-0.184446) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.477562 / 0.215209 (0.262353) | 4.772711 / 2.077655 (2.695056) | 2.458725 / 1.504120 (0.954605) | 2.269871 / 1.541195 (0.728676) | 2.365502 / 1.468490 (0.897012) | 0.556182 / 4.584777 (-4.028595) | 3.408016 / 3.745712 (-0.337697) | 1.730639 / 5.269862 (-3.539222) | 1.000973 / 4.565676 (-3.564704) | 0.068293 / 0.424275 (-0.355982) | 0.012119 / 0.007607 (0.004512) | 0.581281 / 0.226044 (0.355236) | 5.811930 / 2.268929 (3.543001) | 2.890337 / 55.444624 (-52.554288) | 2.592156 / 6.876477 (-4.284321) | 2.687764 / 2.142072 (0.545691) | 0.664282 / 4.805227 (-4.140946) | 0.136029 / 6.500664 (-6.364635) | 0.067493 / 0.075469 (-0.007976) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330723 / 1.841788 (-0.511064) | 14.379172 / 8.074308 (6.304864) | 14.153286 / 10.191392 (3.961894) | 0.142942 / 0.680424 (-0.537482) | 0.016698 / 0.534201 (-0.517503) | 0.361044 / 0.579283 (-0.218239) | 0.393174 / 0.434364 (-0.041190) | 0.423107 / 0.540337 (-0.117231) | 0.514299 / 1.386936 (-0.872637) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4809/comments | https://api.github.com/repos/huggingface/datasets/issues/4809/events | https://github.com/huggingface/datasets/pull/4809 | 1,332,842,747 | PR_kwDODunzps483Y4h | 4,809 | Complete the mlqa dataset card | [] | closed | false | null | 4 | 2022-08-09T07:38:06Z | 2022-08-09T16:26:21Z | 2022-08-09T13:26:43Z | null | I fixed the issue #4808
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4809/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4809",
"merged_at": "2022-08-09T13:26:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4809"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for your contribution, @eldhoittangeorge.\r\n> \r\n> The CI error message: https://github.com/huggingface/datasets/runs/7743526624?check_suite_focus=true\r\n> \r\n> ```\r\n> E ValueError: The following issues have been found in the dataset cards:\r\n> E YAML tags:\r\n> E __init__() missing 5 required positional arguments: 'annotations_creators', 'language_creators', 'license', 'size_categories', and 'source_datasets'\r\n> ```\r\n\r\nI will fix the CI error.",
"@eldhoittangeorge, thanks again for all the fixes. Just a minor one before we can merge this PR: https://github.com/huggingface/datasets/runs/7744885754?check_suite_focus=true\r\n```\r\nE YAML tags:\r\nE Could not validate the metadata, found the following errors:\r\nE * field 'language_creators':\r\nE \t['unknown'] are not registered tags for 'language_creators', reference at https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/creators.json\r\n```",
"> \r\n\r\nThanks, I updated the file. \r\nA small suggestion can you mention this link https://github.com/huggingface/datasets/tree/main/src/datasets/utils/resources/ in the contribution page. So that others will know the acceptable values for the tags."
] |
https://api.github.com/repos/huggingface/datasets/issues/4345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4345/comments | https://api.github.com/repos/huggingface/datasets/issues/4345/events | https://github.com/huggingface/datasets/pull/4345 | 1,235,062,787 | PR_kwDODunzps43xrky | 4,345 | Fix never ending GH Action to build documentation | [] | closed | false | null | 1 | 2022-05-13T10:40:10Z | 2022-05-13T11:29:43Z | 2022-05-13T11:22:00Z | null | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4345/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"merged_at": "2022-05-13T11:22:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/307/comments | https://api.github.com/repos/huggingface/datasets/issues/307/events | https://github.com/huggingface/datasets/issues/307 | 644,187,262 | MDU6SXNzdWU2NDQxODcyNjI= | 307 | Specify encoding for MRPC | [] | closed | false | null | 0 | 2020-06-23T22:24:49Z | 2020-06-25T12:16:09Z | 2020-06-25T12:16:09Z | null | Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:
```python
dataset = nlp.load_dataset('glue', 'mrpc')
```
```python
Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache\huggingface\datasets\glue\mrpc\1.0.0...
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in incomplete_dir(dirname)
369 try:
--> 370 yield tmp_dir
371 if os.path.isdir(dirname):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
430 verify_infos = not save_infos and not ignore_verifications
--> 431 self._download_and_prepare(
432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
482 # Prepare split will record examples associated to the split
--> 483 self._prepare_split(split_generator, **prepare_split_kwargs)
484 except OSError:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _prepare_split(self, split_generator)
663 generator = self._generate_examples(**split_generator.gen_kwargs)
--> 664 for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False):
665 example = self.info.features.encode_example(record)
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
~\Miniconda3\envs\nlp\lib\site-packages\tqdm\std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_examples(self, data_file, split, mrpc_files)
514 examples = self._generate_example_mrpc_files(mrpc_files=mrpc_files, split=split)
--> 515 for example in examples:
516 yield example["idx"], example
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\glue\7fc58099eb3983a04c8dac8500b70d27e6eceae63ffb40d7900c977897bb58c6\glue.py in _generate_example_mrpc_files(self, mrpc_files, split)
576 reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
--> 577 for n, row in enumerate(reader):
578 is_row_in_dev = [row["#1 ID"], row["#2 ID"]] in dev_ids
~\Miniconda3\envs\nlp\lib\csv.py in __next__(self)
110 self.fieldnames
--> 111 row = next(self.reader)
112 self.line_num = self.reader.line_num
~\Miniconda3\envs\nlp\lib\encodings\cp1252.py in decode(self, input, final)
22 def decode(self, input, final=False):
---> 23 return codecs.charmap_decode(input,self.errors,decoding_table)[0]
24
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1180: character maps to <undefined>
```
The fix is the same: specify `utf-8` encoding when opening the file. The previous fix didn't work as MRPC's download process is different from the others in GLUE.
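Roughly the kind of change I have in mind, sketched outside the actual loading script (the file name below is just an example standing in for the downloaded MRPC file):
```python
import csv

mrpc_file = "msr_paraphrase_train.txt"  # example path, not the loader's real variable

# Passing encoding="utf-8" explicitly avoids falling back to the platform default
# (cp1252 on Windows), which is what triggers the UnicodeDecodeError above.
with open(mrpc_file, encoding="utf-8") as f:
    reader = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    for n, row in enumerate(reader):
        pass  # same iteration as in glue.py's _generate_example_mrpc_files
```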
I am going to propose a new PR :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/307/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4554/comments | https://api.github.com/repos/huggingface/datasets/issues/4554/events | https://github.com/huggingface/datasets/pull/4554 | 1,283,369,453 | PR_kwDODunzps46Sv_f | 4,554 | Fix WMT dataset loading issue and docs update (Re-opened) | [] | closed | false | null | 1 | 2022-06-24T07:26:16Z | 2022-07-08T15:39:20Z | 2022-07-08T15:27:44Z | null | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
Let me know if any additional changes are required.
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4554/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4554.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4554",
"merged_at": "2022-07-08T15:27:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4554.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4554"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3379/comments | https://api.github.com/repos/huggingface/datasets/issues/3379/events | https://github.com/huggingface/datasets/pull/3379 | 1,071,079,146 | PR_kwDODunzps4vYr7K | 3,379 | iter_archive on zipfiles with better compression type check | [] | closed | false | null | 10 | 2021-12-04T01:04:48Z | 2023-01-24T13:00:19Z | 2023-01-24T12:53:08Z | null | Hello @lhoestq , thank you for your detailed answer on previous PR !
I made this new PR because I misused git on the previous one #3347.
Related issue #3272.
# Comments :
* For the extension check, I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_extraction_protocol_local`:
**I removed this part :**
```python
elif path.endswith(".tar.gz") or path.endswith(".tgz"):
raise NotImplementedError(
f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead."
)
```
**And also changed :**
```diff
- extension = path.split(".")[-1]
+ extension = "tar" if path.endswith(".tar.gz") else path.split(".")[-1]
```
The reason for this is that an archive like **.tar.gz** would otherwise be considered a **.gz**, which is handled with **zipfile**, though **.tar.gz** can only be opened using **tarfile**.
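To make the intent concrete, here is a tiny standalone sketch of that check (the function name is illustrative, not the actual helper in **download_manager.py**; the `.tgz` case mirrors the block quoted above):
```python
# Illustrative only: mirrors the ".tar.gz"/".tgz" special case discussed above.
def guess_extraction_protocol(path: str) -> str:
    path = path.lower()
    if path.endswith((".tar.gz", ".tgz")):
        return "tar"  # must be routed to tarfile, not the gzip/zip handling
    return path.split(".")[-1]

assert guess_extraction_protocol("data/archive.tar.gz") == "tar"
assert guess_extraction_protocol("data/archive.zip") == "zip"
assert guess_extraction_protocol("data/file.gz") == "gz"
```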
Please tell me if there's anything to change.
# Tasks :
- [x] download_manager.py
- [x] streaming_download_manager.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3379/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3379/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3379.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3379",
"merged_at": "2023-01-24T12:53:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3379.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3379"
} | true | [
"Hello @lhoestq, thank you for your answer.\r\n\r\nI don't use pytest a lot so I think I might need some help on it :) but I tried some tests for `streaming_download_manager.py` only. I don't know how to test `download_manager.py` since we need to use local files.\r\n\r\n# Comments : \r\n* In **download_manager.py** I removed some unnecessary imports after the simplification of `_get_extraction_protocol_local`.\r\n* In **streaming_download_manager** I moved the raised Error as suggested.\r\n \r\n### I also started some tests on `StreamingDownloadManager()` :\r\n* Used an existing zipfile url and added a new one that has a folder and many files : \r\n```python\r\nTEST_GG_DRIVE_ZIPPED_URL = \"https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh\"\r\nTEST_GG_DRIVE2_ZIPPED_URL = \"https://drive.google.com/uc?export=download&id=1X4jyUBBbShyCRfD-vCO1ZvfqFXP3NEeU\"\r\n``` \r\n* **For now is being tested :**\r\n * Return type of the function : should be tuple\r\n * Files names\r\n * Files content\r\n * Added an `xfail` test for the gzip file, because I get a `zipfile.BadZipFile exception`.\r\n\r\n\r\n * And lastly, changed the test for `_get_extraction_protocol_throws` since it was moved to `_extract` : \r\n ```diff\r\[email protected](raises=NotImplementedError)\r\ndef test_streaming_dl_manager_get_extraction_protocol_throws(urlpath):\r\n- _get_extraction_protocol(urlpath)\r\n\r\[email protected](raises=NotImplementedError)\r\ndef test_streaming_dl_manager_get_extraction_protocol_throws(urlpath):\r\n+ StreamingDownloadManager()._extract(urlpath)\r\n```\r\n\r\n\r\n",
"Hello,\r\nIn this Commit was taken into account all the comment escept the `test_download _manager.py`.\r\nI will work on that for the next commit.\r\n\r\nSorry again for being inactive lately in this PR.\r\n\r\n",
"thanks a lot ! This CI seems to have import errors now though ?",
"> thanks a lot ! This CI seems to have import errors now though ?\r\n\r\nYes sorry about that, it's due to a cyclic import I didn't pay attention to.\r\n\r\nWill fix that in the next Commit along with adding the tests to download_manager.\r\n\r\n",
"Hello @Mehdi2402, are you still interested in working on this further? ",
"> Hello @Mehdi2402, are you still interested in working on this further?\r\n\r\nHello @albertvillanova, yes I would like to resume work on this.",
"Great, we would like to have this feature.\r\n\r\nFirst, you should resolve the conflicts with the main branch, by merging main into your feature branch and then fixing the conflicts by hand. Let us know if you would need some help on this: we can resolve the conflicts for you, so that you can continue your contribution afterwards.",
"_The documentation is not available anymore as the PR was closed or merged._",
"I refactored the code to make this PR ready for the final review.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009475 / 0.011353 (-0.001878) | 0.005249 / 0.011008 (-0.005759) | 0.099713 / 0.038508 (0.061205) | 0.036328 / 0.023109 (0.013219) | 0.295955 / 0.275898 (0.020057) | 0.368779 / 0.323480 (0.045299) | 0.007796 / 0.007986 (-0.000190) | 0.005635 / 0.004328 (0.001306) | 0.077351 / 0.004250 (0.073100) | 0.045290 / 0.037052 (0.008238) | 0.306634 / 0.258489 (0.048145) | 0.345025 / 0.293841 (0.051184) | 0.038241 / 0.128546 (-0.090306) | 0.012338 / 0.075646 (-0.063308) | 0.335184 / 0.419271 (-0.084088) | 0.047737 / 0.043533 (0.004204) | 0.295092 / 0.255139 (0.039953) | 0.319810 / 0.283200 (0.036610) | 0.102777 / 0.141683 (-0.038906) | 1.399444 / 1.452155 (-0.052711) | 1.450239 / 1.492716 (-0.042478) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.202919 / 0.018006 (0.184912) | 0.447493 / 0.000490 (0.447003) | 0.004187 / 0.000200 (0.003987) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028570 / 0.037411 (-0.008841) | 0.113536 / 0.014526 (0.099010) | 0.120525 / 0.176557 (-0.056031) | 0.162732 / 0.737135 (-0.574404) | 0.130195 / 0.296338 (-0.166144) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408831 / 0.215209 (0.193622) | 4.094929 / 2.077655 (2.017274) | 1.810356 / 1.504120 (0.306236) | 1.618532 / 1.541195 (0.077337) | 1.681310 / 1.468490 
(0.212820) | 0.705157 / 4.584777 (-3.879620) | 3.789040 / 3.745712 (0.043327) | 2.121842 / 5.269862 (-3.148020) | 1.522505 / 4.565676 (-3.043171) | 0.085443 / 0.424275 (-0.338832) | 0.012065 / 0.007607 (0.004458) | 0.521176 / 0.226044 (0.295132) | 5.201899 / 2.268929 (2.932970) | 2.303055 / 55.444624 (-53.141569) | 1.971721 / 6.876477 (-4.904756) | 2.053827 / 2.142072 (-0.088245) | 0.864810 / 4.805227 (-3.940418) | 0.168040 / 6.500664 (-6.332624) | 0.063332 / 0.075469 (-0.012138) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208105 / 1.841788 (-0.633683) | 14.722757 / 8.074308 (6.648449) | 14.396695 / 10.191392 (4.205303) | 0.152702 / 0.680424 (-0.527722) | 0.028828 / 0.534201 (-0.505373) | 0.439573 / 0.579283 (-0.139710) | 0.438891 / 0.434364 (0.004527) | 0.509043 / 0.540337 (-0.031295) | 0.603531 / 1.386936 (-0.783405) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007337 / 0.011353 (-0.004016) | 0.005080 / 0.011008 (-0.005929) | 0.097916 / 0.038508 (0.059408) | 0.032722 / 0.023109 (0.009612) | 0.338925 / 0.275898 (0.063027) | 0.372945 / 0.323480 (0.049465) | 0.005464 / 0.007986 (-0.002522) | 0.004031 / 0.004328 (-0.000297) | 0.076761 / 0.004250 (0.072511) | 0.046804 / 0.037052 (0.009752) | 0.336088 / 0.258489 (0.077599) | 0.403704 / 0.293841 (0.109863) | 0.036928 / 0.128546 (-0.091618) | 0.012204 / 0.075646 (-0.063442) | 0.335467 / 0.419271 (-0.083804) | 0.049158 / 0.043533 (0.005625) | 0.342040 / 0.255139 (0.086901) | 0.356729 / 0.283200 (0.073530) | 0.101280 / 0.141683 (-0.040403) | 1.432540 / 1.452155 (-0.019614) | 1.545228 / 1.492716 (0.052512) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226003 / 0.018006 (0.207997) | 0.445601 / 0.000490 (0.445112) | 0.000408 / 0.000200 (0.000208) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028861 / 0.037411 (-0.008551) | 0.112083 / 0.014526 (0.097557) | 0.130843 / 0.176557 (-0.045713) | 0.159275 / 0.737135 (-0.577861) | 0.127582 / 0.296338 (-0.168756) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446357 / 0.215209 (0.231148) | 4.448568 / 2.077655 (2.370914) | 2.197861 / 1.504120 (0.693741) | 2.004675 / 1.541195 (0.463480) | 2.052082 / 1.468490 (0.583592) | 0.710770 / 4.584777 (-3.874007) | 3.868936 / 3.745712 (0.123224) | 2.095008 / 5.269862 (-3.174854) | 1.363064 / 4.565676 (-3.202613) | 0.086734 / 0.424275 (-0.337541) | 0.012272 / 0.007607 (0.004665) | 0.546378 / 0.226044 (0.320334) | 5.475189 / 2.268929 (3.206260) | 2.702742 / 55.444624 (-52.741882) | 2.335880 / 6.876477 (-4.540597) | 2.396194 / 2.142072 (0.254121) | 0.856249 / 4.805227 (-3.948978) | 0.170466 / 6.500664 (-6.330198) | 0.063585 / 0.075469 (-0.011884) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.236981 / 1.841788 (-0.604807) | 15.046616 / 8.074308 (6.972307) | 14.551781 / 10.191392 (4.360389) | 0.144485 / 0.680424 (-0.535939) | 0.017774 / 0.534201 (-0.516427) | 0.446274 / 0.579283 (-0.133010) | 0.436871 / 0.434364 (0.002507) | 0.504503 / 0.540337 (-0.035834) | 0.602014 / 1.386936 (-0.784922) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2684/comments | https://api.github.com/repos/huggingface/datasets/issues/2684/events | https://github.com/huggingface/datasets/pull/2684 | 948,771,753 | MDExOlB1bGxSZXF1ZXN0NjkzNTY0MDY4 | 2,684 | Print absolute local paths in load_dataset error messages | [] | closed | false | null | 0 | 2021-07-20T15:28:28Z | 2021-07-22T20:48:19Z | 2021-07-22T14:01:10Z | null | Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2684/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2684/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2684",
"merged_at": "2021-07-22T14:01:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2684"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/928/comments | https://api.github.com/repos/huggingface/datasets/issues/928/events | https://github.com/huggingface/datasets/pull/928 | 753,722,324 | MDExOlB1bGxSZXF1ZXN0NTI5NzQ1OTIx | 928 | Add the Multilingual Amazon Reviews Corpus | [] | closed | false | null | 0 | 2020-11-30T18:58:06Z | 2020-12-01T16:04:30Z | 2020-12-01T16:04:27Z | null | - **Name:** Multilingual Amazon Reviews Corpus* (`amazon_reviews_multi`)
- **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese.
- **Paper:** https://arxiv.org/abs/2010.02573
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/928/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/928",
"merged_at": "2020-12-01T16:04:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/928"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4658/comments | https://api.github.com/repos/huggingface/datasets/issues/4658/events | https://github.com/huggingface/datasets/issues/4658 | 1,297,001,390 | I_kwDODunzps5NTquu | 4,658 | Transfer CI tests to GitHub Actions | [] | closed | false | null | 0 | 2022-07-07T08:10:50Z | 2022-07-12T11:18:25Z | 2022-07-12T11:18:25Z | null | Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4658/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5060/comments | https://api.github.com/repos/huggingface/datasets/issues/5060/events | https://github.com/huggingface/datasets/issues/5060 | 1,395,382,940 | I_kwDODunzps5TK9qc | 5,060 | Unable to Use Custom Dataset Locally | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-10-03T21:55:16Z | 2022-10-06T14:29:18Z | 2022-10-06T14:29:17Z | null | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in the same folder or repository of the dataset script,
you can just pass the relative paths to the files instead of URLs.
```
Accordingly, I put the [relative path](https://huggingface.co/datasets/zpn/pubchem_selfies/blob/main/pubchem_selfies.py#L76) to the data to be used. I was able to test the dataset and generate the metadata locally with `datasets-cli test path/to/<your-dataset-loading-script> --save_infos --all_configs`
However, if I try to load the data using `load_dataset`, I get the following error
```
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("zpn/pubchem_selfies", streaming=True)
>>> t = dataset["train"]
>>> for item in t:
...... print(item)
...... break
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 723, in __iter__
for key, example in self._iter():
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 713, in _iter
yield from ex_iterable
File "/Users/zachnussbaum/env/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 113, in __iter__
yield from self.generate_examples_fn(**self.kwargs)
File "/Users/zachnussbaum/.cache/huggingface/modules/datasets_modules/datasets/zpn--pubchem_selfies/d2571f35996765aea70fd3f3f8e3882d59c401fb738615c79282e2eb1d9f7a25/pubchem_selfies.py", line 475, in _generate_examples
with gzip.open(filepath, mode="rt") as f:
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 58, in open
binary_file = GzipFile(filename, gz_mode, compresslevel)
File "/usr/local/Cellar/[email protected]/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/gzip.py", line 173, in __init__
fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')
FileNotFoundError: [Errno 2] No such file or directory: 'https://huggingface.co/datasets/zpn/pubchem_selfies/resolve/main/data/Compound_021000001_021500000/Compound_021000001_021500000_SELFIES.jsonl.gz'
```
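For reference, the streaming-friendly pattern suggested in the comments below goes through the builtin-looking `open` (which `datasets` patches to handle URLs in dataset scripts) and hands the resulting file object to `gzip`. A minimal sketch, assuming a `_generate_examples(self, filepath)` method over JSON-lines data:
```python
import gzip
import json

def _generate_examples(self, filepath):
    # `open` is patched by `datasets` inside dataset scripts, so this also
    # works when `filepath` is a URL in streaming mode.
    with open(filepath, "rb") as f:
        with gzip.open(f, "rt") as lines:
            for idx, line in enumerate(lines):
                yield idx, json.loads(line)
```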
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.1
- Platform: macOS-12.5.1-x86_64-i386-64bit
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5060/timeline | null | completed | null | null | false | [
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read from a URL? Any links to code/documentation would be greatly appreciated, I'd love to learn more",
"`datasets` extends `open` in dataset scripts to work with URLs. The builtin `open` from python only works with local files.\r\n\r\nYou can find the extension here: https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/download/streaming_download_manager.py#L435-L451\r\n\r\nI think we can create a docs section dedicated to streaming to explain how this works",
"Closing this one - feel free to reopen if you have more questions"
] |
https://api.github.com/repos/huggingface/datasets/issues/1766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1766/comments | https://api.github.com/repos/huggingface/datasets/issues/1766/events | https://github.com/huggingface/datasets/issues/1766 | 792,044,105 | MDU6SXNzdWU3OTIwNDQxMDU= | 1,766 | Issues when run two programs compute the same metrics | [] | closed | false | null | 2 | 2021-01-22T14:22:55Z | 2021-02-02T10:38:06Z | 2021-02-02T10:38:06Z | null | I got the following error when running two different programs that both compute sacreblue metrics. It seems that both read/and/write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch_9_label
avg_loss = valid(epoch, args.batch, args.validation, args.with_label)
File "train_matching_min.py", line 93, in valid
bleu += eval.compute()
File "/u/tlhoang/projects/seal/match/models/eval.py", line 23, in compute
return self.metric.compute()['score']
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 387, in compute
self._finalize()
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/metric.py", line 355, in _finalize
self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 231, in read_files
pa_table = self._read_files(files)
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 170, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/dccstor/know/anaconda3/lib/python3.7/site-packages/datasets/arrow_reader.py", line 299, in _get_dataset_from_filename
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 481, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Expected to read 1819307375 metadata bytes, but only read 454396
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1766/timeline | null | completed | null | null | false | [
"Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're supposed to detect it using our locking mechanism. Not sure why it didn't work in your case. Could you share some code that reproduces the issue ? This would help us investigate.",
"Thank you for your response. I fixed the issue by set \"keep_in_memory=True\" when load_metric. \r\nI cannot share the entire source code but below is the wrapper I wrote:\r\n\r\n```python\r\nclass Evaluation:\r\n def __init__(self, metric='sacrebleu'):\r\n # self.metric = load_metric(metric, keep_in_memory=True)\r\n self.metric = load_metric(metric)\r\n\r\n def add(self, predictions, references):\r\n self.metric.add_batch(predictions=predictions, references=references)\r\n\r\n def compute(self):\r\n return self.metric.compute()['score']\r\n```\r\n\r\nThen call the given wrapper as follows:\r\n\r\n```python\r\neval = Evaluation(metric='sacrebleu')\r\nfor query, candidates, labels in tqdm(dataset):\r\n predictions = net.generate(query)\r\n references = [[s] for s in labels]\r\n eval.add(predictions, references)\r\n if n % 100 == 0:\r\n bleu += eval.compute()\r\n eval = Evaluation(metric='sacrebleu')"
] |
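A minimal sketch of the two workarounds discussed in the comments above — a per-run `experiment_id` or an in-memory metric — for avoiding collisions on the shared cache file (illustrative, not copied from the thread):
```python
from datasets import load_metric

# Give each concurrent run its own cache file instead of sharing
# default_experiment-1-0.arrow...
bleu_run_a = load_metric("sacrebleu", experiment_id="run_a")

# ...or skip the on-disk cache entirely.
bleu_in_memory = load_metric("sacrebleu", keep_in_memory=True)
```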
https://api.github.com/repos/huggingface/datasets/issues/390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/390/comments | https://api.github.com/repos/huggingface/datasets/issues/390/events | https://github.com/huggingface/datasets/pull/390 | 656,956,384 | MDExOlB1bGxSZXF1ZXN0NDQ5MTYxMzY3 | 390 | Concatenate datasets | [] | closed | false | null | 6 | 2020-07-14T23:24:37Z | 2020-07-22T09:49:58Z | 2020-07-22T09:49:58Z | null | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. I'm not in love with the method name, so I would love to hear suggestions.
Usage:
```python
from nlp import Dataset, load_dataset
data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]}
dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2)
dset_concat = Dataset.from_concat([dset1, dset2])
print(dset_concat)
# Dataset(schema: {'id': 'int64'}, num_rows: 6)
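# Editor's sketch, not part of the original proposal: the discussion in the
# comments below converged on a symmetric, module-level helper instead, e.g.
# dset_concat = nlp.concatenate_datasets([dset1, dset2])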
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/390/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/390",
"merged_at": "2020-07-22T09:49:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/390"
} | true | [
"Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files + other_dataset._data_files)\r\n```",
"I feel like \"WikiBooks\" would be a multi task dataset that could fit in the #217 discussion.\r\nNot sure concatenate should be the solution for a multi task dataset.",
"Thanks for the suggestion! `dset1.concatenate(dset2)` does feel more natural. Although this seems to be a different \"class\" of transformation function than map() or filter(), acting on two datasets rather than on one. I would prefer the function signature treat both datasets symmetrically.\r\n\r\nPython lists have `list1 + list2` or `list1.extend(list2)`.\r\nNumPy has `np.concatenate((arr1, arr2))`.\r\nPandas has `pd.join((df1, df2))`.\r\nPyTorch has `ConcatDataset((dset1, dset2))`.\r\n\r\nGiven the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?",
"The multi-task discussion is interesting, thanks for pointing me to that! I'll be focusing on T5 in a few weeks, so I'm sure I'll have many opinions then :). For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.",
"> Given the symmetrical treatment and clear communication that this creates a new object, rather than a simple chaining on the first, my preference is now for `nlp.concatenate((dset1, dset2))`. This would place the function in the same API class as `nlp.load_dataset`. Does that work?\r\n\r\nYep I like this idea. Maybe `nlp.concatenate_datasets()` ?\r\n\r\n> For now, I think a simple concatenate feature is important and orthogonal to that discussion. For example, a user may want to create a custom dataset that joins Wikipedia with their own custom text.\r\n\r\nI agree :)",
"Great, just updated!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3456/comments | https://api.github.com/repos/huggingface/datasets/issues/3456/events | https://github.com/huggingface/datasets/pull/3456 | 1,084,687,973 | PR_kwDODunzps4wEwXz | 3,456 | [WER] Better error message for wer | [] | closed | false | null | 4 | 2021-12-20T11:38:40Z | 2021-12-20T16:53:37Z | 2021-12-20T16:53:36Z | null | Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error message a word-error-rate is computed which is incorrect. E.g. when doing the following:
```python
from datasets import load_metric
wer = load_metric("wer")
target_str = ["hello this is nice", "hello the weather is bloomy"]
pred_str = [["hello it's nice"], ["hello it's the weather"]]
print("Wrong:", wer.compute(predictions=pred_str, references=target_str))
print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str))
```
We get:
```
Wrong: 1.0
Correct 0.5555555555555556
```
meaning that we get a word-error rate even for incorrectly passed input formats. We should raise an error here instead, so that people don't spend hours fixing a model when their incorrect evaluation input format is actually the cause of the low WER. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3456/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3456",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3456"
} | true | [
"Hi ! I don't think this would solve this issue.\r\nCurrently it looks like there's a bug that converts the list `[\"hello it's nice\"]` to a string `'[\"hello it's nice\"]'` since this is what the metric expects as input. The conversion is done before the data are passed to `_compute()`.\r\n\r\nThis is `Value(\"string\").encode_example` that is called to do the conversion. Since `str()` encoding is too permissive we should consider raising an error if the example is not a string (even though it can be converted to string). ",
"> called\r\n\r\nAh yeah you're right",
"I just opened https://github.com/huggingface/datasets/pull/3460 to fix that. It now raises an error instead of computing the wrong WER",
"Thank you - that should be good enough!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3497/comments | https://api.github.com/repos/huggingface/datasets/issues/3497/events | https://github.com/huggingface/datasets/issues/3497 | 1,090,050,148 | I_kwDODunzps5A-Nhk | 3,497 | Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-12-28T18:03:49Z | 2022-01-21T13:22:27Z | 2022-01-21T13:22:27Z | null | Running:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
raw_datasets = raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
)
num_workers = 16
def prepare_dataset(batch):
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
return batch
raw_datasets.map(
prepare_dataset,
remove_columns=next(iter(raw_datasets.values())).column_names,
num_proc=16,
desc="preprocess datasets",
)
```
gives
```bash
File "/home/patrick/experiments/run_bug.py", line 25, in <module>
raw_datasets.map(
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map
{
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp>
k: dataset.map(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map
shards = [
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp>
self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard
return self.select(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices
return Dataset(
File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__
raise ValueError(
ValueError: External features info don't match the dataset:
Got
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
but expected something like
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
```
Versions:
```python
- `datasets` version: 1.16.2.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 6.0.1
```
and `transformers`:
```
- `transformers` version: 4.16.0.dev0
- Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33
- Python version: 3.9.7
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3497/timeline | null | completed | null | null | false | [
"Same error occures when using max samples with https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py",
"I'm seeing this too, when using preprocessing_num_workers with \r\nhttps://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py"
] |
https://api.github.com/repos/huggingface/datasets/issues/2014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2014/comments | https://api.github.com/repos/huggingface/datasets/issues/2014/events | https://github.com/huggingface/datasets/pull/2014 | 825,916,531 | MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3 | 2,014 | more explicit method parameters | [] | closed | false | null | 0 | 2021-03-09T13:18:29Z | 2021-03-10T10:08:37Z | 2021-03-10T10:08:36Z | null | re: #2009
I'm not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2014/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2014.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2014",
"merged_at": "2021-03-10T10:08:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2014.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2014"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/972/comments | https://api.github.com/repos/huggingface/datasets/issues/972/events | https://github.com/huggingface/datasets/pull/972 | 754,787,314 | MDExOlB1bGxSZXF1ZXN0NTMwNjI0NTI3 | 972 | Add Children's Book Test (CBT) dataset | [] | closed | false | null | 2 | 2020-12-01T22:53:26Z | 2021-03-19T11:30:03Z | 2021-03-19T11:30:03Z | null | Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016).
Sentence completion given a few sentences as context from a children's book. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/972/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/972/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/972.diff",
"html_url": "https://github.com/huggingface/datasets/pull/972",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/972.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/972"
} | true | [
"Hi @lhoestq,\r\n\r\nI guess this PR can be closed since we merged #2044?\r\n\r\nI have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?",
"Closing in favor of #2044, thanks again :)\r\n\r\n> I have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?\r\n\r\nYea it's ok actually, at that time I thought there was another homepage for this dataset"
] |
https://api.github.com/repos/huggingface/datasets/issues/1335 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1335/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1335/comments | https://api.github.com/repos/huggingface/datasets/issues/1335/events | https://github.com/huggingface/datasets/pull/1335 | 759,705,835 | MDExOlB1bGxSZXF1ZXN0NTM0NjYzNzQ2 | 1,335 | Added Bianet dataset | [] | closed | false | null | 1 | 2020-12-08T19:10:32Z | 2020-12-14T10:00:56Z | 2020-12-14T10:00:56Z | null | Hi :hugs:, This is a PR for [Bianet: A parallel news corpus in Turkish, Kurdish and English; Source](http://opus.nlpl.eu/Bianet.php) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1335/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1335/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1335.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1335",
"merged_at": "2020-12-14T10:00:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1335.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1335"
} | true | [
"merging since the Ci is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4710/comments | https://api.github.com/repos/huggingface/datasets/issues/4710/events | https://github.com/huggingface/datasets/pull/4710 | 1,308,958,525 | PR_kwDODunzps47ny0L | 4,710 | Add object detection processing tutorial | [] | closed | false | null | 3 | 2022-07-19T04:23:46Z | 2022-07-21T20:10:35Z | 2022-07-21T19:56:42Z | null | The following adds a quick guide on how to process object detection datasets with `albumentations`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4710/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4710",
"merged_at": "2022-07-21T19:56:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4710"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Great idea! Now that we have more than one task, it makes sense to separate image classification and object detection so it'll be easier for users to follow.",
"@lhoestq do we want to do that in this PR, or should we merge it and let @stevhliu reorganize separately? "
] |
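A condensed sketch of the kind of `albumentations` pipeline such a guide wires up (the bounding-box format and label field below are illustrative assumptions, not taken from the tutorial):
```python
import albumentations as A

# Compose image + bounding-box augmentations; boxes are kept in sync
# with the image transforms via bbox_params.
transform = A.Compose(
    [A.HorizontalFlip(p=0.5), A.RandomBrightnessContrast(p=0.2)],
    bbox_params=A.BboxParams(format="coco", label_fields=["category"]),
)
```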
https://api.github.com/repos/huggingface/datasets/issues/235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/235/comments | https://api.github.com/repos/huggingface/datasets/issues/235/events | https://github.com/huggingface/datasets/pull/235 | 630,952,297 | MDExOlB1bGxSZXF1ZXN0NDI3OTM1MjQ0 | 235 | Add experimental datasets | [] | closed | false | null | 6 | 2020-06-04T15:54:56Z | 2020-06-12T15:38:55Z | 2020-06-12T15:38:55Z | null | ## Adding an *experimental datasets* folder
After using the 🤗nlp library for some time, I find that while it makes it super easy to create new memory-mapped datasets with lots of cool utilities, a lot of what I want to do doesn't work well with the current `MockDownloader` based testing paradigm, making it hard to share my work with the community.
My suggestion would be to add a **datasets\_experimental** folder so we can start making these new datasets public without having to completely re-think testing for every single one. We would allow contributors to submit dataset PRs in this folder, but require an explanation for why the current testing suite doesn't work for them. We can then aggregate the feedback and periodically see what's missing from the current tests.
I have added a **datasets\_experimental** folder to the repository and S3 bucket with two initial datasets: ELI5 (explainlikeimfive) and a Wikipedia Snippets dataset to support indexing (wiki\_snippets)
### ELI5
#### Dataset description
This allows people to download the [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190) dataset, along with two variants based on the r/askscience and r/AskHistorians. Full Reddit dumps for each month are downloaded from [pushshift](https://files.pushshift.io/reddit/), filtered for submissions and comments from the desired subreddits, then deleted one at a time to save space. The resulting dataset is split into a training, validation, and test dataset for r/explainlikeimfive, r/askscience, and r/AskHistorians respectively, where each item is a question along with all of its high scoring answers.
#### Issues with the current testing
1. the list of files to be downloaded is not pre-defined, but rather determined by parsing an index web page at run time. This is necessary as the name and compression type of the dump files changes from month to month as the pushshift website is maintained. Currently, the dummy folder requires the user to know which files will be downloaded.
2. to save time, the script works on the compressed files using the corresponding python packages rather than first running `download\_and\_extract` then filtering the extracted files.
### Wikipedia Snippets
#### Dataset description
This script creates a *snippets* version of a source Wikipedia dataset: each article is split into passages of fixed length which can then be indexed using ElasticSearch or a dense indexer. The script currently handles all **wikipedia** and **wiki40b** source datasets, and allows the user to choose the passage length and how much overlap they want across passages. In addition to the passage text, each snippet also has the article title, list of titles of sections covered by the text, and information to map the passage back to the initial dataset at the paragraph and character level.
#### Issues with the current testing
1. The DatasetBuilder needs to call `nlp.load_dataset()`. Currently, testing is not recursive (the test doesn't know where to find the dummy data for the source dataset)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/235/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/235.diff",
"html_url": "https://github.com/huggingface/datasets/pull/235",
"merged_at": "2020-06-12T15:38:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/235.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/235"
} | true | [
"I think it would be nicer to not create a new folder `datasets_experimental` , but just put your datasets also into the folder `datasets` for the following reasons:\r\n\r\n- From my point of view, the datasets are not very different from the other datasets (assuming that we soon have C4, and the beam datasets) so I don't see why we require a new dataset folder\r\n\r\n- I'm not a big fan of adding a boolean flag to the `load_dataset()` function that basically switches between folder names on S3. The user has to know whether a dataset script is experimental or not. User installing nlp with pip won't see that there are folders called `datasets` and `datasets_experimental`\r\n\r\n- If we do this just to bypass the test, I think a good solution could be: For all tests that are too complicated to be currently tested with the testing framework, we can add a class variable called `do_test = False` to the dataset builder class and a default `do_test = True` to the abstract dataset class and skip all tests that have that variable in the dataset test framework similar to what is done to beam datasets: https://github.com/huggingface/nlp/blob/2e0a8639a79b1abc848cff5c669094d40bba0f63/tests/test_dataset_common.py#L79 \r\nWe can also print a warning for all dataset tests having `do_test = False`. This variable would only concern testing and we would not have a problem removing it at a later stage IMO.\r\n\r\n- This way the datascripts are backward compatible and can be used with earlier versions of `nlp` (not that this matters too much atm) \r\n\r\nWhat is your opinion on this @lhoestq @thomwolf ?",
"Very cool to have add those datasets :)\r\nI understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n\r\nI like the idea of the `do_tests=False` class variable. \r\nHowever it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n\r\nIf we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.",
"Yeah I really like the idea of a partial test.\r\n\r\nMy main concern with the class variable is visibility, but having a warning would help with that. Maybe even get the user to agree > \"are you sure you want to go ahead?\"",
"> Very cool to have add those datasets :)\r\n> I understand that making the dummy data for this case is not fun. I'm sure we'll be able to add them soon. For now it's still interesting to have them in the library, even if we can't test all the code with dummy data.\r\n> \r\n> I like the idea of the `do_tests=False` class variable.\r\n> However it would be cool to test at least that we can load the module and instantiate the builder (only ignore the dummy data test for now). In that case a better name could be `test_dummy_data=False` or something like that.\r\n> \r\n> If we want to be picky we can also add a warning in `_download_and_prepare` to tell the user that datasets with `test_dummy_data=False` are still experimental.\r\n\r\n`test_dummy_data=False` sounds good to me!",
"There we go: added a `test_dummy_data` class variable that is `False` by default for the `BeamBasedBuilder` and `True` for everyone else (except the new `explainlikeimfive` and `wiki_snippets`)\r\n\r\nNote that `wiki_snippets` should become obsolete as soon as @lhoestq adds in the `IndexedDataset` class",
"Great! LGTM!"
] |
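A minimal sketch of the `test_dummy_data` class variable agreed on in the discussion above (the builder name is a hypothetical example):
```python
import nlp

class ExplainLikeImFive(nlp.GeneratorBasedBuilder):  # hypothetical builder name
    # Opt out of only the dummy-data test; the module can still be loaded and
    # the builder instantiated, and a warning can mark it as experimental.
    test_dummy_data = False
```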
https://api.github.com/repos/huggingface/datasets/issues/5725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5725/comments | https://api.github.com/repos/huggingface/datasets/issues/5725/events | https://github.com/huggingface/datasets/issues/5725 | 1,660,455,202 | I_kwDODunzps5i-Iki | 5,725 | How to limit the number of examples in dataset, for testing? | [] | closed | false | null | 3 | 2023-04-10T08:41:43Z | 2023-04-21T06:16:24Z | 2023-04-21T06:16:24Z | null | ### Describe the bug
I am using this command:
`data = load_dataset("json", data_files=data_path)`
However, I want to add a parameter to limit the number of loaded examples to 10, for development purposes, but can't find this simple parameter.
### Steps to reproduce the bug
In the description.
### Expected behavior
To be able to limit the number of examples
### Environment info
Nothing special | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5725/timeline | null | completed | null | null | false | [
"Hi! You can use the `nrows` parameter for this:\r\n```python\r\ndata = load_dataset(\"json\", data_files=data_path, nrows=10)\r\n```",
"@mariosasko I get:\r\n\r\n`TypeError: __init__() got an unexpected keyword argument 'nrows'`",
"I misread the format in which the dataset is stored - the `nrows` parameter works for CSV, but not JSON.\r\n\r\nThis means the only option is first to create a DataFrame and then convert it to a Dataset object:\r\n```python\r\nimport pandas as pd\r\nfrom datasets import Dataset\r\n\r\ndf = pd.read_json(data_path, lines=True, nrows=10)\r\nds = Dataset.from_pandas(df)\r\n```"
] |
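Besides the pandas workaround in the comments above, two other standard ways to cap the number of examples (a sketch; `data_path` is a placeholder for the user's file path):
```python
from datasets import load_dataset

data_path = "data.jsonl"  # placeholder for the path used in the issue

# Slice the split at load time...
small = load_dataset("json", data_files=data_path, split="train[:10]")

# ...or load the full split and keep only the first 10 rows.
full = load_dataset("json", data_files=data_path, split="train")
small = full.select(range(10))
```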
https://api.github.com/repos/huggingface/datasets/issues/4531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4531/comments | https://api.github.com/repos/huggingface/datasets/issues/4531/events | https://github.com/huggingface/datasets/issues/4531 | 1,277,054,172 | I_kwDODunzps5MHkzc | 4,531 | Dataset Viewer issue for CSV datasets | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-06-20T14:56:24Z | 2022-06-21T08:28:46Z | 2022-06-21T08:28:27Z | null | ### Link
https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin
### Description
I'm populating CSV datasets [here](https://huggingface.co/scikit-learn), but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either.
You can replicate the problem by simply uploading any CSV dataset.
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4531/timeline | null | completed | null | null | false | [
"this should now be fixed",
"Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2292/comments | https://api.github.com/repos/huggingface/datasets/issues/2292/events | https://github.com/huggingface/datasets/pull/2292 | 871,230,183 | MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy | 2,292 | Fixed typo seperate->separate | [] | closed | false | null | 0 | 2021-04-29T16:40:53Z | 2021-04-30T13:29:18Z | 2021-04-30T13:03:12Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2292/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"merged_at": "2021-04-30T13:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3259/comments | https://api.github.com/repos/huggingface/datasets/issues/3259/events | https://github.com/huggingface/datasets/pull/3259 | 1,052,189,775 | PR_kwDODunzps4ud5W3 | 3,259 | Updating details of IRC disentanglement data | [] | closed | false | null | 1 | 2021-11-12T17:16:58Z | 2021-11-18T17:19:33Z | 2021-11-18T17:19:33Z | null | I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3259/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3259",
"merged_at": "2021-11-18T17:19:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3259"
} | true | [
"Thank you for the cleanup!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3067/comments | https://api.github.com/repos/huggingface/datasets/issues/3067/events | https://github.com/huggingface/datasets/pull/3067 | 1,024,023,185 | PR_kwDODunzps4tFSCy | 3,067 | add story_cloze | [] | closed | false | null | 4 | 2021-10-12T16:36:53Z | 2021-10-13T13:48:13Z | 2021-10-13T13:48:13Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3067/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3067",
"merged_at": "2021-10-13T13:48:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3067"
} | true | [
"Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```",
"@lhoestq can't fix the last test fails.",
"> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ",
"Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation."
] |
https://api.github.com/repos/huggingface/datasets/issues/3249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3249/comments | https://api.github.com/repos/huggingface/datasets/issues/3249/events | https://github.com/huggingface/datasets/pull/3249 | 1,050,193,138 | PR_kwDODunzps4uXeea | 3,249 | Fix streaming for id_newspapers_2018 | [] | closed | false | null | 0 | 2021-11-10T18:55:30Z | 2021-11-12T14:01:32Z | 2021-11-12T14:01:31Z | null | To be compatible with streaming, this dataset must use `dl_manager.iter_archive` since the data are in a .tgz file | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3249/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3249.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3249",
"merged_at": "2021-11-12T14:01:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3249.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3249"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1545/comments | https://api.github.com/repos/huggingface/datasets/issues/1545/events | https://github.com/huggingface/datasets/pull/1545 | 765,550,283 | MDExOlB1bGxSZXF1ZXN0NTM4OTg3OTY0 | 1,545 | add hrwac | [] | closed | false | null | 1 | 2020-12-13T17:31:54Z | 2020-12-18T13:35:17Z | 2020-12-18T13:35:17Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1545",
"merged_at": "2020-12-18T13:35:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1545"
} | true | [
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/3661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3661/comments | https://api.github.com/repos/huggingface/datasets/issues/3661/events | https://github.com/huggingface/datasets/pull/3661 | 1,121,000,251 | PR_kwDODunzps4x61ad | 3,661 | Remove unnecessary 'r' arg in | [] | closed | false | null | 1 | 2022-02-01T17:29:27Z | 2022-02-07T16:57:27Z | 2022-02-07T16:02:42Z | null | Originally from #3489 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3661/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3661",
"merged_at": "2022-02-07T16:02:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3661"
} | true | [
"The CI failure is only because of the datasets is missing some sections in their cards - we can ignore that since it's unrelated to this PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/4382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4382/comments | https://api.github.com/repos/huggingface/datasets/issues/4382/events | https://github.com/huggingface/datasets/issues/4382 | 1,243,839,783 | I_kwDODunzps5KI30n | 4,382 | First time trying | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2022-05-21T02:15:18Z | 2022-05-21T19:20:44Z | 2022-05-21T19:20:44Z | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4382/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5498/comments | https://api.github.com/repos/huggingface/datasets/issues/5498/events | https://github.com/huggingface/datasets/issues/5498 | 1,568,190,529 | I_kwDODunzps5deLBB | 5,498 | TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset | [] | closed | false | null | 2 | 2023-02-02T14:46:49Z | 2023-02-04T17:19:37Z | 2023-02-04T17:19:36Z | null | ### Describe the bug
Hi,
Thanks for the amazing work on the library!
**Describe the bug**
I think I might have noticed a small bug in the filter method.
Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.
### Steps to reproduce the bug
```
train_dataset = train_dataset.filter(
function=lambda example: example["image"] is not None,
batched=True,
batch_size=10)
```
Error message:
```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
476 validate_fingerprint(kwargs[fingerprint_name])
478 # Call actual function
--> 480 out = func(self, *args, **kwargs)
...
-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
5667 if indices_mapping is not None:
5668 indices_array = pa.array(indices_array, type=pa.uint64())
TypeError: 'bool' object is not iterable
```
**Removing batched=True allows to bypass the issue.**
### Expected behavior
According to the docs, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the `batched=True` arg?
source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5498/timeline | null | completed | null | null | false | [
"Hi! Instead of a single boolean, your filter function should return an iterable (of booleans) in the batched mode like so:\r\n```python\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda batch: [image is not None for image in batch[\"image\"]], \r\n batched=True,\r\n batch_size=10)\r\n```\r\n\r\nPS: You can make this operation much faster by operating directly on the arrow data to skip the decoding part:\r\n```python\r\ntrain_dataset = train_dataset.with_format(\"arrow\")\r\ntrain_dataset = train_dataset.filter(\r\n function=lambda table: table[\"image\"].is_valid().to_pylist(), \r\n batched=True,\r\n batch_size=100)\r\ntrain_dataset = train_dataset.with_format(None)\r\n```",
"Thank a lot!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3273/comments | https://api.github.com/repos/huggingface/datasets/issues/3273/events | https://github.com/huggingface/datasets/issues/3273 | 1,053,554,038 | I_kwDODunzps4-y_V2 | 3,273 | Respect row ordering when concatenating datasets along axis=1 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-11-15T11:27:14Z | 2021-11-17T15:41:11Z | 2021-11-17T15:41:11Z | null | Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored.
A minimal reproducible example:
```python
>>> from datasets import Dataset, concatenate_datasets
>>> a = Dataset.from_dict({"a": [30, 20, 10]})
>>> b = Dataset.from_dict({"b": [2, 1, 3]})
>>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1)
>>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]}
{'a': [10, 20, 30], 'b': [3, 1, 2]}
```
I've noticed the bug while working on #3195. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3273/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/710/comments | https://api.github.com/repos/huggingface/datasets/issues/710/events | https://github.com/huggingface/datasets/pull/710 | 714,186,999 | MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0 | 710 | fix README typos/ consistency | [] | closed | false | null | 0 | 2020-10-03T22:20:56Z | 2020-10-17T09:52:45Z | 2020-10-17T09:52:45Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/710/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/710",
"merged_at": "2020-10-17T09:52:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/710"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/838/comments | https://api.github.com/repos/huggingface/datasets/issues/838/events | https://github.com/huggingface/datasets/pull/838 | 740,328,382 | MDExOlB1bGxSZXF1ZXN0NTE4ODM0NTE5 | 838 | CNN/Dailymail Dataset Card | [] | closed | false | null | 0 | 2020-11-10T23:56:43Z | 2020-11-25T21:09:51Z | 2020-11-25T21:09:50Z | null | Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail
One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may not be reflected in the versions that we currently have in the repo?), but it's only the structure that's changing rather than the content in this particular case, at least between versions 2.0.0 and 3.0.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/838/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/838.diff",
"html_url": "https://github.com/huggingface/datasets/pull/838",
"merged_at": "2020-11-25T21:09:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/838.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/838"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1715/comments | https://api.github.com/repos/huggingface/datasets/issues/1715/events | https://github.com/huggingface/datasets/pull/1715 | 782,754,441 | MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5 | 1,715 | add Korean intonation-aided intention identification dataset | [] | closed | false | null | 0 | 2021-01-10T06:29:04Z | 2021-09-17T16:54:13Z | 2021-01-12T17:14:33Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1715/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"merged_at": "2021-01-12T17:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1715"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/4967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4967/comments | https://api.github.com/repos/huggingface/datasets/issues/4967/events | https://github.com/huggingface/datasets/pull/4967 | 1,369,092,452 | PR_kwDODunzps4-vbS- | 4,967 | Strip "/" in local dataset path to avoid empty dataset name error | [] | closed | false | null | 2 | 2022-09-11T23:09:16Z | 2022-09-29T10:46:21Z | 2022-09-12T15:30:38Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4967/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"merged_at": "2022-09-12T15:30:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool :-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5962/comments | https://api.github.com/repos/huggingface/datasets/issues/5962/events | https://github.com/huggingface/datasets/issues/5962 | 1,761,589,882 | I_kwDODunzps5o_7p6 | 5,962 | Issue with train_test_split maintaining the same underlying PyArrow Table | [] | open | false | null | 0 | 2023-06-17T02:19:58Z | 2023-06-17T02:19:58Z | null | null | ### Describe the bug
I've been using the train_test_split method in the datasets module to split my HuggingFace Dataset into separate training, validation, and testing subsets. However, I've noticed an issue where the split datasets appear to maintain the same underlying PyArrow Table.
### Steps to reproduce the bug
1. Load any dataset ```dataset = load_dataset("lhoestq/demo1")```
2. Try the next code:
```python
from datasets import Dataset, DatasetDict
train_size = 0.6
split_train = dataset["train"].train_test_split(
train_size=train_size,
)
separate_dataset_dict = DatasetDict({
"train": split_train["train"],
"test": split_train["test"],
})
```
3. Printing the dataset with ```print(separate_dataset_dict)``` indicates that the splits have 3 and 2 rows respectively.
4. But the next code:
```python
print(len(separate_dataset_dict["train"].data['id']))
print(len(separate_dataset_dict["test"].data['id']))
```
Indicates that both tables still have 5 rows.
### Expected behavior
However, I've noticed that `train_test_split["train"].data`, `test_val_split["train"].data`, and `test_val_split["test"].data` are identical, suggesting that they all point to the same underlying PyArrow Table. This means that the split datasets are not independent, as I expected them to be.
I believe this is a bug in the train_test_split implementation, as I would expect this function to return datasets with separate underlying PyArrow Tables. Could you please help me understand if this is expected behavior, or if there's a workaround to create truly independent split datasets?
I would appreciate any assistance with this issue. Thank you.
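As a possible workaround for the shared table (a sketch using the public `datasets` API; whether the sharing itself is intended is a separate question), `flatten_indices()` rewrites each split into its own Arrow table instead of keeping an indices mapping over the original one:
```python
from datasets import load_dataset, DatasetDict

dataset = load_dataset("lhoestq/demo1")
split_train = dataset["train"].train_test_split(train_size=0.6)

separate_dataset_dict = DatasetDict({
    # materialize each split as an independent PyArrow table
    "train": split_train["train"].flatten_indices(),
    "test": split_train["test"].flatten_indices(),
})

print(len(separate_dataset_dict["train"].data["id"]))  # 3
print(len(separate_dataset_dict["test"].data["id"]))   # 2
```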
### Environment info
I tried in Colab:
- `datasets` version: 2.13.0
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1
and my PC:
- `datasets` version: 2.13.0
- Platform: Linux-5.15.107+-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.15.1
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5962/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5450/comments | https://api.github.com/repos/huggingface/datasets/issues/5450/events | https://github.com/huggingface/datasets/issues/5450 | 1,551,109,365 | I_kwDODunzps5cdAz1 | 5,450 | to_tf_dataset with a TF collator causes bizarrely persistent slowdown | [] | closed | false | null | 7 | 2023-01-20T16:08:37Z | 2023-02-13T14:13:34Z | 2023-02-13T14:13:34Z | null | ### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data collator that returns `tf` tensors, become very slow. We haven't been able to figure this one out - it can be intermittent, and we have no idea what could possibly cause it. The weirdest thing is that **the slowdown affects other attempts to access the underlying dataset**. If you try to iterate over the `tf.data.Dataset`, then interrupt execution, and then try to iterate over the original dataset, the original dataset is now also very slow! This is true even if the dataset format is not set to `tf` - the iteration is slow even though it's not calling TF at all!
There is a simple workaround for this - we can simply get our data collators to return `np` tensors. When we do this, the bug is never triggered and everything is fine. In general, `np` is preferred for this kind of preprocessing work anyway, when the preprocessing is not going to be compiled into a pure `tf.data` pipeline! However, the issue is fascinating, and the TF team were wondering if anyone in datasets (cc @lhoestq @mariosasko) might have an idea of what could cause this.
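In the meantime, a minimal sketch of that workaround (assuming a tokenized `datasets.Dataset` named `ds`, a `tokenizer`, and a compiled Keras `model` already exist; the column names are illustrative):
```python
from transformers import DataCollatorWithPadding

# Returning NumPy arrays from the collator avoids the slowdown described above.
collator = DataCollatorWithPadding(tokenizer, return_tensors="np")

tf_dataset = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=32,
    shuffle=True,
    collate_fn=collator,
)
model.fit(tf_dataset, epochs=1)
```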
### Steps to reproduce the bug
Run the attached Colab.
### Expected behavior
The slowdown should go away, or at least not persist after we stop iterating over the `tf.data.Dataset`
### Environment info
The issue occurs on multiple versions of Python and TF, both on local machines and on Colab.
All testing was done using the latest versions of `transformers` and `datasets` from `main` | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5450/timeline | null | completed | null | null | false | [
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n\r\n",
"If \"mp\" is multiprocessing, this might suggest some kind of negative interaction between the JPEG decoder and TF's handling of processes/threads. Note that we haven't merged the parallel `to_tf_dataset` PR yet, so it's not caused by that PR!",
"Update: MP isn't multiprocessing at all, it's an internal PIL method for loading metadata from JPEG files. No idea why that would be a bottleneck, but I'll see if a Python profiler can't figure out where the time is actually being spent.",
"After further profiling, the slowdown is in the C methods for JPEG decoding that are included as part of PIL. Because Python profilers can't inspect inside that, I don't have any further information on which lines exactly are responsible for the slowdown or why.\r\n\r\nIn the meantime, I'm going to suggest switching from `return_tensors=\"tf\"` to `return_tensors=\"np\"` in most of our `transformers` code - this generally works better for pre-processing. Two relevant PRs are [here](https://github.com/huggingface/transformers/pull/21266) and [here](https://github.com/huggingface/notebooks/pull/308).",
"Closing this issue as we've done what we can with this one! "
] |
https://api.github.com/repos/huggingface/datasets/issues/220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/220/comments | https://api.github.com/repos/huggingface/datasets/issues/220/events | https://github.com/huggingface/datasets/pull/220 | 627,280,683 | MDExOlB1bGxSZXF1ZXN0NDI1MTEzMzEy | 220 | dataset_arcd | [] | closed | false | null | 2 | 2020-05-29T13:46:50Z | 2020-05-29T14:58:40Z | 2020-05-29T14:57:21Z | null | Added Arabic Reading Comprehension Dataset (ARCD): https://arxiv.org/abs/1906.05394 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/220/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/220",
"merged_at": "2020-05-29T14:57:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/220"
} | true | [
"you can rebase from master to fix the CI error :)",
"Awesome !"
] |
https://api.github.com/repos/huggingface/datasets/issues/1721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1721/comments | https://api.github.com/repos/huggingface/datasets/issues/1721/events | https://github.com/huggingface/datasets/pull/1721 | 783,828,428 | MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5 | 1,721 | [Scientific papers] Mirror datasets zip | [] | closed | false | null | 4 | 2021-01-12T01:15:40Z | 2021-01-12T11:49:15Z | 2021-01-12T11:41:47Z | null | Datasets were uploading to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively to escape google drive quota and enable faster download. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1721/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1721.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1721",
"merged_at": "2021-01-12T11:41:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1721.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1721"
} | true | [
"> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically",
"That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 tokens. ",
"Ok thanks :)",
"Awesome good to merge for me :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/2738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2738/comments | https://api.github.com/repos/huggingface/datasets/issues/2738/events | https://github.com/huggingface/datasets/pull/2738 | 957,517,746 | MDExOlB1bGxSZXF1ZXN0NzAwOTI5NzA4 | 2,738 | Sunbird AI Ugandan low resource language dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 4 | 2021-08-01T15:18:00Z | 2022-10-03T09:37:30Z | 2022-10-03T09:37:30Z | null | Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2738/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2738",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2738"
} | true | [
"Hi @ak3ra , have you had a chance to take my comments into account ?\r\n\r\nLet me know if you have questions or if I can help :)",
"@lhoestq Working on this, thanks for the detailed review :) ",
"Hi ! Cool thanks :)\r\nFeel free to merge master into your branch to fix the CI issues\r\n\r\nLet me know if you have questions or if I can help",
"Thanks for your contribution, @ak3ra. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/890/comments | https://api.github.com/repos/huggingface/datasets/issues/890/events | https://github.com/huggingface/datasets/pull/890 | 751,534,050 | MDExOlB1bGxSZXF1ZXN0NTI4MDI5NjA3 | 890 | Add LER | [] | closed | false | null | 9 | 2020-11-26T11:58:23Z | 2020-12-01T13:33:35Z | 2020-12-01T13:26:16Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/890/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/890/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/890",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/890"
} | true | [
"Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! 💥 💔 💥\r\n1 file would be reformatted, 257 files would be left unchanged.\r\nmake: *** [quality] Error 1\r\n",
"Awesome thanks :)\r\nTo automatically format the python files you can run `make style`",
"I did that now. But still getting the following error:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\nmake: *** [quality] Error 1\r\n\r\nHowever: When I look at the file I don't see any trailing whitespace",
"maybe a bug with flake8 ? could you try to update it ? which version do you have ?",
"This is my flake8 version: 3.7.9 (mccabe: 0.6.1, pycodestyle: 2.5.0, pyflakes: 2.1.1) CPython 3.8.5 on Darwin\r\n",
"Now I updated to: 3.8.4 (mccabe: 0.6.1, pycodestyle: 2.6.0, pyflakes: 2.2.0) CPython 3.8.5 on Darwin\r\n\r\nAnd now I even get additional errors:\r\nblack --check --line-length 119 --target-version py36 tests src benchmarks datasets metrics\r\nAll done! ✨ 🍰 ✨\r\n258 files would be left unchanged.\r\nisort --check-only tests src benchmarks datasets metrics\r\nflake8 tests src benchmarks datasets metrics\r\ndatasets/polyglot_ner/polyglot_ner.py:123:64: F541 f-string is missing placeholders\r\ndatasets/ler/ler.py:46:96: W291 trailing whitespace\r\ndatasets/ler/ler.py:47:68: W291 trailing whitespace\r\ndatasets/ler/ler.py:48:102: W291 trailing whitespace\r\ndatasets/ler/ler.py:49:112: W291 trailing whitespace\r\ndatasets/ler/ler.py:50:92: W291 trailing whitespace\r\ndatasets/ler/ler.py:51:116: W291 trailing whitespace\r\ndatasets/ler/ler.py:52:84: W291 trailing whitespace\r\ndatasets/math_dataset/math_dataset.py:233:25: E741 ambiguous variable name 'l'\r\nmetrics/coval/coval.py:236:31: F541 f-string is missing placeholders\r\nmake: *** [quality] Error 1\r\n\r\nI do this on macOS Catalina 10.15.7 in case this matters",
"Code quality test now passes, thanks :) \r\n\r\nTo fix the other tests failing I think you can just rebase from master.\r\nAlso make sure that the dummy data test passes with\r\n```python\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_ler\r\n```",
"I will close this PR because abishek did the same better (https://github.com/huggingface/datasets/pull/944)",
"Sorry you had to close your PR ! It looks like this week's sprint doesn't always make it easy to see what's being added/what's already added. \r\nThank you for contributing to the library. You did a great job on adding LER so feel free to add other ones that you would like to see in the library, it will be a pleasure to review"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4464 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4464/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4464/comments | https://api.github.com/repos/huggingface/datasets/issues/4464/events | https://github.com/huggingface/datasets/pull/4464 | 1,265,682,931 | PR_kwDODunzps45XlWW | 4,464 | Extend support for streaming datasets that use xml.dom.minidom.parse | [] | closed | false | null | 1 | 2022-06-09T06:58:25Z | 2022-06-09T08:43:24Z | 2022-06-09T08:34:16Z | null | This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function.
This PR adds support for streaming datasets like "Yaxin/SemEval2015".
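For illustration only (a hypothetical sketch of the general idea, not the actual patch in this PR): the function can be wrapped so that string paths/URLs are routed through a streaming-aware opener such as `fsspec` before being handed to the original parser.
```python
import xml.dom.minidom

import fsspec

_original_parse = xml.dom.minidom.parse

def _streaming_parse(file, *args, **kwargs):
    # Paths and URLs are opened via fsspec so remote files can be streamed;
    # file-like objects are passed through unchanged.
    if isinstance(file, str):
        with fsspec.open(file, "rb") as f:
            return _original_parse(f, *args, **kwargs)
    return _original_parse(file, *args, **kwargs)
```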
Fix #4453. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4464/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4464/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4464.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4464",
"merged_at": "2022-06-09T08:34:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4464.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4464"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5443/comments | https://api.github.com/repos/huggingface/datasets/issues/5443/events | https://github.com/huggingface/datasets/pull/5443 | 1,550,178,914 | PR_kwDODunzps5ILbk8 | 5,443 | Update share tutorial | [] | closed | false | null | 2 | 2023-01-20T01:09:14Z | 2023-01-20T15:44:45Z | 2023-01-20T15:37:30Z | null | Based on feedback from discussion #5423, this PR updates the sharing tutorial with a mention of writing your own dataset loading script to support more advanced dataset creation options like multiple configs.
I'll open a separate PR to update the *Create a Dataset card* with the new Hub metadata UI update 😄 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5443/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5443.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5443",
"merged_at": "2023-01-20T15:37:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5443.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5443"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009885 / 0.011353 (-0.001468) | 0.005338 / 0.011008 (-0.005670) | 0.099967 / 0.038508 (0.061459) | 0.036860 / 0.023109 (0.013751) | 0.295283 / 0.275898 (0.019385) | 0.369504 / 0.323480 (0.046024) | 0.008267 / 0.007986 (0.000281) | 0.004375 / 0.004328 (0.000046) | 0.076294 / 0.004250 (0.072043) | 0.047058 / 0.037052 (0.010006) | 0.314463 / 0.258489 (0.055974) | 0.348125 / 0.293841 (0.054284) | 0.038334 / 0.128546 (-0.090213) | 0.012102 / 0.075646 (-0.063544) | 0.333049 / 0.419271 (-0.086223) | 0.050727 / 0.043533 (0.007195) | 0.299244 / 0.255139 (0.044105) | 0.318210 / 0.283200 (0.035010) | 0.112609 / 0.141683 (-0.029074) | 1.450377 / 1.452155 (-0.001778) | 1.485177 / 1.492716 (-0.007539) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287083 / 0.018006 (0.269077) | 0.564268 / 0.000490 (0.563778) | 0.003578 / 0.000200 (0.003378) | 0.000093 / 0.000054 (0.000039) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026755 / 0.037411 (-0.010657) | 0.105857 / 0.014526 (0.091331) | 0.118291 / 0.176557 (-0.058266) | 0.155735 / 0.737135 (-0.581401) | 0.122527 / 0.296338 (-0.173812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396992 / 0.215209 (0.181783) | 3.958562 / 2.077655 (1.880908) | 1.781570 / 1.504120 (0.277451) | 1.617743 / 1.541195 (0.076549) | 1.753504 / 1.468490 
(0.285013) | 0.681509 / 4.584777 (-3.903268) | 3.816910 / 3.745712 (0.071198) | 2.087359 / 5.269862 (-3.182503) | 1.328380 / 4.565676 (-3.237297) | 0.083542 / 0.424275 (-0.340733) | 0.012081 / 0.007607 (0.004473) | 0.505127 / 0.226044 (0.279082) | 5.075136 / 2.268929 (2.806208) | 2.259871 / 55.444624 (-53.184753) | 1.944302 / 6.876477 (-4.932175) | 2.102624 / 2.142072 (-0.039449) | 0.819779 / 4.805227 (-3.985448) | 0.165584 / 6.500664 (-6.335080) | 0.061774 / 0.075469 (-0.013695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.208258 / 1.841788 (-0.633530) | 14.841635 / 8.074308 (6.767327) | 14.484515 / 10.191392 (4.293123) | 0.156464 / 0.680424 (-0.523959) | 0.028839 / 0.534201 (-0.505362) | 0.440860 / 0.579283 (-0.138423) | 0.433892 / 0.434364 (-0.000472) | 0.515339 / 0.540337 (-0.024998) | 0.608838 / 1.386936 (-0.778098) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007548 / 0.011353 (-0.003804) | 0.005464 / 0.011008 (-0.005544) | 0.096987 / 0.038508 (0.058479) | 0.034472 / 0.023109 (0.011363) | 0.391249 / 0.275898 (0.115351) | 0.432779 / 0.323480 (0.109299) | 0.006170 / 0.007986 (-0.001816) | 0.004316 / 0.004328 (-0.000013) | 0.074184 / 0.004250 (0.069934) | 0.054254 / 0.037052 (0.017202) | 0.397947 / 0.258489 (0.139458) | 0.451253 / 0.293841 (0.157412) | 0.037098 / 0.128546 (-0.091449) | 0.012649 / 0.075646 (-0.062997) | 0.333533 / 0.419271 (-0.085739) | 0.050247 / 0.043533 (0.006714) | 0.390446 / 0.255139 (0.135307) | 0.410547 / 0.283200 (0.127347) | 0.110888 / 0.141683 (-0.030795) | 1.452160 / 1.452155 (0.000006) | 1.596331 / 1.492716 (0.103615) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.256061 / 0.018006 (0.238055) | 0.552674 / 0.000490 (0.552184) | 0.003362 / 0.000200 (0.003162) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030199 / 0.037411 (-0.007213) | 0.110288 / 0.014526 (0.095762) | 0.127412 / 0.176557 (-0.049145) | 0.165428 / 0.737135 (-0.571707) | 0.131658 / 0.296338 (-0.164680) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441946 / 0.215209 (0.226737) | 4.414209 / 2.077655 (2.336555) | 2.284530 / 1.504120 (0.780410) | 2.110752 / 1.541195 (0.569557) | 2.210751 / 1.468490 (0.742260) | 0.698829 / 4.584777 (-3.885948) | 3.819044 / 3.745712 (0.073332) | 3.274021 / 5.269862 (-1.995840) | 1.781284 / 4.565676 (-2.784393) | 0.085264 / 0.424275 (-0.339011) | 0.012360 / 0.007607 (0.004753) | 0.553519 / 0.226044 (0.327475) | 5.466395 / 2.268929 (3.197467) | 2.825839 / 55.444624 (-52.618786) | 2.439451 / 6.876477 (-4.437026) | 2.582534 / 2.142072 (0.440462) | 0.841644 / 4.805227 (-3.963583) | 0.172288 / 6.500664 (-6.328376) | 0.067215 / 0.075469 (-0.008254) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.283623 / 1.841788 (-0.558165) | 15.753163 / 8.074308 (7.678855) | 14.983263 / 10.191392 (4.791871) | 0.187584 / 0.680424 (-0.492840) | 0.017999 / 0.534201 (-0.516202) | 0.427157 / 0.579283 (-0.152126) | 0.435456 / 0.434364 (0.001092) | 0.496800 / 0.540337 (-0.043537) | 0.592557 / 1.386936 (-0.794379) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1561/comments | https://api.github.com/repos/huggingface/datasets/issues/1561/events | https://github.com/huggingface/datasets/pull/1561 | 765,831,436 | MDExOlB1bGxSZXF1ZXN0NTM5MTAwNjAy | 1,561 | Lama | [] | closed | false | null | 6 | 2020-12-14T03:27:10Z | 2020-12-28T09:51:47Z | 2020-12-28T09:51:47Z | null | This the LAMA dataset for probing facts and common sense from language models.
See https://github.com/facebookresearch/LAMA for more details. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1561/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1561",
"merged_at": "2020-12-28T09:51:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1561"
} | true | [
"Let me know why the pyarrow test is failing. For one of the config \"trex\", I had to load an initial datafile for a dictionary which is used to augment the rest of the datasets. In the dummy data, the dictionary file was truncated so I had to fudge that. I'm not sure if that is the issue.\r\n",
"@ontocord it just needs a rerun and it will be good to go.",
"THanks @tanmoyio. How do I do a rerun?",
"@ontocord contributor can’t rerun it, the maintainers will rerun it, it may take lil bit of time as there are so many PRs left to be reviewed and merged ",
"@lhoestq not sure why it is failing. i've made all modifications. ",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/7 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7/comments | https://api.github.com/repos/huggingface/datasets/issues/7/events | https://github.com/huggingface/datasets/pull/7 | 601,780,534 | MDExOlB1bGxSZXF1ZXN0NDA0OTgyMzA2 | 7 | Fix issue 5: allow empty datasets | [] | closed | false | null | 0 | 2020-04-17T07:59:56Z | 2020-04-29T09:27:13Z | 2020-04-20T13:23:48Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/7.diff",
"html_url": "https://github.com/huggingface/datasets/pull/7",
"merged_at": "2020-04-20T13:23:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/7.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3550/comments | https://api.github.com/repos/huggingface/datasets/issues/3550/events | https://github.com/huggingface/datasets/issues/3550 | 1,096,522,377 | I_kwDODunzps5BW5qJ | 3,550 | Bug in `openbookqa` dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2022-01-07T17:32:57Z | 2022-05-04T06:33:00Z | 2022-05-04T06:32:19Z | null | ## Describe the bug
Dataset entries contains a typo.
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> obqa = load_dataset('openbookqa', 'main')
>>> obqa['train'][0]
```
## Expected results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['A', 'B', 'C', 'D']}, 'answerKey': 'D'}
```
## Actual results
```python
{'id': '7-980', 'question_stem': 'The sun is responsible for', 'choices': {'text': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting'], 'label': ['puppies learning new tricks', 'children growing up and getting old', 'flowers wilting in a vase', 'plants sprouting, blooming and wilting']}, 'answerKey': 'D'}
```
The bug is present in all configs and all splits.
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.4.0-1057-aws-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3550/timeline | null | completed | null | null | false | [
"Closed by:\r\n- #4259"
] |
https://api.github.com/repos/huggingface/datasets/issues/4952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4952/comments | https://api.github.com/repos/huggingface/datasets/issues/4952/events | https://github.com/huggingface/datasets/pull/4952 | 1,366,354,604 | PR_kwDODunzps4-meM0 | 4,952 | Add test-datasets CI job | [] | closed | false | null | 2 | 2022-09-08T13:38:30Z | 2022-09-16T13:28:02Z | 2022-09-16T13:25:48Z | null | To avoid having too many conflicts in the datasets and metrics dependencies I split the CI into test and test-catalog
test runs the tests of the core of the `datasets` lib, while test-catalog tests the dataset scripts and metric scripts
This also makes `pip install -e .[dev]` much smaller for developers
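A hypothetical sketch of what the dependency split could look like on the `setup.py` side (names and package lists are illustrative, not the actual diff):
```python
# setup.py (excerpt) - illustrative only
TESTS_REQUIRE = ["pytest", "pytest-xdist"]  # core library tests

TESTS_CATALOG_REQUIRE = TESTS_REQUIRE + [
    # heavy per-dataset / per-metric dependencies needed only for script tests
    "scikit-learn",
    "nltk",
]

EXTRAS_REQUIRE = {
    "tests": TESTS_REQUIRE,
    "tests_catalog": TESTS_CATALOG_REQUIRE,
    "dev": TESTS_REQUIRE,  # "dev" no longer pulls every dataset/metric dependency
}
```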
WDYT @albertvillanova ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4952/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4952",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4952"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing this one since the dataset scripts will be removed in https://github.com/huggingface/datasets/pull/4974"
] |
https://api.github.com/repos/huggingface/datasets/issues/1473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1473/comments | https://api.github.com/repos/huggingface/datasets/issues/1473/events | https://github.com/huggingface/datasets/pull/1473 | 762,055,694 | MDExOlB1bGxSZXF1ZXN0NTM2NjQyODI5 | 1,473 | add srwac | [] | closed | false | null | 2 | 2020-12-11T08:20:29Z | 2020-12-17T11:40:59Z | 2020-12-17T11:40:59Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1473",
"merged_at": "2020-12-17T11:40:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1473"
} | true | [
"Connection error failed. Need rerun",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3339/comments | https://api.github.com/repos/huggingface/datasets/issues/3339/events | https://github.com/huggingface/datasets/issues/3339 | 1,066,662,477 | I_kwDODunzps4_k_pN | 3,339 | to_tf_dataset fails on TPU | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 5 | 2021-11-30T00:50:52Z | 2021-12-02T14:21:27Z | null | null | Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs.
## Steps to reproduce the bug
I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=sharing
## Expected results
dataset from `to_tf_dataset` works in `model.fit`
Right below the first error in the colab I use `tf.data.Dataset.from_tensor_slices` and `model.fit` works just fine. This is the desired outcome.
## Actual results
```
InternalError: 5 root error(s) found.
(0) INTERNAL: {{function_node __inference_train_function_30558}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1638231897.932218653","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3151,"referenced_errors":[{"created":"@1638231897.932216754","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":161,"grpc_status":14}]}
[[{{node StatefulPartitionedCall}}]]
[[MultiDeviceIteratorGetNextFromShard]]
Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic.
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[tpu_compile_succeeded_assert/_14023832043698465348/_7/_439]]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
- Tensorflow 2.7.0
- `transformers` 4.12.5
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3339/timeline | null | null | null | null | false | [
"This might be related to https://github.com/tensorflow/tensorflow/issues/38762 , what do you think @Rocketknight1 ?\r\n> Dataset.from_generator is expected to not work with TPUs as it uses py_function underneath which is incompatible with Cloud TPU 2VM setup. If you would like to read from large datasets, maybe try to materialize it on disk and use TFRecordDataest instead.",
"Hi @lhoestq @nbroad1881, I think it's very similar, yes. Unfortunately `to_tf_dataset` uses `tf.numpy_function` which can't be compiled - this is a necessary evil to load from the underlying Arrow dataset. We need to update the notebooks/examples to clarify that this won't work, or to identify a workaround. You may be able to get it to work on an actual cloud TPU VM, but those are quite new and we haven't tested it yet. ",
"Thank you for the explanation. I didn't realize the nuances of `tf.numpy_function`. In this scenario, would it be better to use `export(format='tfrecord')` ? It's not quite the same, but for very large datasets that don't fit in memory it looks like it is the only option. I haven't used `export` before, but I do recall reading that there are suggestions for how big and how many tfrecords there should be to not bottleneck the TPU. It might be nice if there were a way for the `export` method to split the files up into appropriate chunk sizes depending on the size of the dataset and the number of devices. And if that is too much, it would be nice to be able to specify the number of files that would be created when using `export`. Well... maybe the user should just do the chunking themselves and call `export` a bunch of times. Whatever the case, you have been helpful. Thanks Tensorflow boy ;-) ",
"Yeah, this is something we really should have a proper guide on. I'll make a note to test some things and make a 'TF TPU best practices' notebook at some point, but in the meantime I think your solution of exporting TFRecords will probably work. ",
"Also: I knew that tweet would haunt me"
] |
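The comments in the record above point to materializing the data on disk and reading it back with `TFRecordDataset` as a TPU-friendly alternative to `to_tf_dataset`. The sketch below illustrates that route with plain TensorFlow serialization; the dataset name and feature schema are placeholders and are not part of the original report.
```python
import tensorflow as tf
from datasets import load_dataset

# Materialize a (placeholder) dataset as TFRecords, then read it back with a
# graph-compatible tf.data pipeline — no tf.numpy_function, so it can be
# compiled for TPU.
ds = load_dataset("imdb", split="train[:1000]")

with tf.io.TFRecordWriter("train.tfrecord") as writer:
    for ex in ds:
        feature = {
            "text": tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[ex["text"].encode("utf-8")])
            ),
            "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[ex["label"]])),
        }
        writer.write(
            tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()
        )

# Assumed feature schema matching what was written above.
feature_spec = {
    "text": tf.io.FixedLenFeature([], tf.string),
    "label": tf.io.FixedLenFeature([], tf.int64),
}

tf_ds = (
    tf.data.TFRecordDataset(["train.tfrecord"])
    .map(lambda r: tf.io.parse_single_example(r, feature_spec))
    .batch(8)
)
```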
https://api.github.com/repos/huggingface/datasets/issues/1345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1345/comments | https://api.github.com/repos/huggingface/datasets/issues/1345/events | https://github.com/huggingface/datasets/pull/1345 | 759,835,486 | MDExOlB1bGxSZXF1ZXN0NTM0NzY5NzMw | 1,345 | First commit of NarrativeQA Dataset | [] | closed | false | null | 0 | 2020-12-08T22:31:59Z | 2021-01-25T15:31:52Z | 2020-12-09T09:29:52Z | null | Added NarrativeQA dataset and included a manual downloading option to download scripts from the original scripts provided by the authors. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1345/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1345/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1345.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1345",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1345.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1345"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3159/comments | https://api.github.com/repos/huggingface/datasets/issues/3159/events | https://github.com/huggingface/datasets/pull/3159 | 1,035,174,560 | PR_kwDODunzps4toKD5 | 3,159 | Make inspect.get_dataset_config_names always return a non-empty list | [] | closed | false | null | 4 | 2021-10-25T13:59:43Z | 2021-10-29T13:14:37Z | 2021-10-28T05:44:49Z | null | Make all named configs cases, so that no special unnamed config case needs to be handled differently.
Fix #3135. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3159/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3159.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3159",
"merged_at": "2021-10-28T05:44:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3159.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3159"
} | true | [
"This PR is already working (although not very beautiful; see below): the idea was to have the `DatasetModule.builder_kwargs` accessible from the `builder_cls`, so that this can generate the default builder config (at the class level, without requiring the builder to be instantiated).\r\n\r\nI have a plan for a follow-up refactoring (same functionality, better implementation, much nicer), but I think we could already merge this, so that @severo can test it in the datasets previewer and report any potential issues.",
"Yes @lhoestq you are completely right. Indeed I was exclusively using `builder_cls.kwargs` to get the community dataset `name` (nothing else): \"lhoestq___demo1\"\r\n\r\nSee et: https://github.com/huggingface/datasets/pull/3159/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R413-R415\r\n\r\nIn your example, the `name` I was getting from `builder_cls.kwargs` was:\r\n```python\r\n{\"name\": \"lhoestq___demo1\",...}\r\n```\r\n\r\nI'm going to refactor all the approach... as I only need the name for this specific case ;)",
"I think this makes more sense now, @lhoestq @severo 😅 ",
"It works well, thanks!"
] |
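A brief usage sketch of the behaviour the PR above targets: `get_dataset_config_names` returning a non-empty list whether or not a dataset defines named configs. The example outputs are indicative only.
```python
from datasets import get_dataset_config_names

# With the change above, a dataset with no explicitly named configs should
# still yield a one-element list rather than an empty one.
print(get_dataset_config_names("openbookqa"))  # e.g. ['main', 'additional']
print(get_dataset_config_names("squad"))       # e.g. ['plain_text']
```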
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | [] | closed | false | null | 0 | 2021-04-09T09:37:57Z | 2021-04-09T09:54:40Z | 2021-04-09T09:54:39Z | null | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"merged_at": "2021-04-09T09:54:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197"
} | true | [] |
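To make the code path concrete, here is a minimal round trip over `save_to_disk`/`load_from_disk`, which the fix in the record above touches: a dataset saved without any indices mapping (no prior `shuffle` or `select`) should reload without errors. The dataset name and output directory are placeholders.
```python
from datasets import load_dataset, load_from_disk

# Round-trip sketch: save a dataset that carries no indices mapping, then reload it.
ds = load_dataset("rotten_tomatoes", split="train")
ds.save_to_disk("rt_train")

reloaded = load_from_disk("rt_train")
assert len(reloaded) == len(ds)
print(reloaded[0])
```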