url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | [] | closed | false | null | 3 | 2021-02-19T19:51:25Z | 2021-02-22T14:56:56Z | 2021-02-22T13:32:49Z | null | Remove unused/unnecessary py_utils functions/classes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"merged_at": "2021-02-22T13:32:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916"
} | true | [
"Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?",
"Sorry @lhoestq, I forgot to update the imports... :/",
"It's fine, the CI should have caught this tbh. Not sure why it did't fail"
] |
https://api.github.com/repos/huggingface/datasets/issues/5522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5522/comments | https://api.github.com/repos/huggingface/datasets/issues/5522/events | https://github.com/huggingface/datasets/pull/5522 | 1,580,183,124 | PR_kwDODunzps5JvTVp | 5,522 | Minor changes in JAX-formatting docstrings & type-hints | [] | closed | false | null | 16 | 2023-02-10T19:05:00Z | 2023-02-15T14:48:27Z | 2023-02-15T13:19:06Z | null | Hi to whoever is reading this! 🤗
## What's in this PR?
I was exploring the code regarding the `JaxFormatter` implemented in 🤗`datasets`, and found some things that IMO could be changed. Those are mainly regarding the docstrings and the type-hints based on `jax`'s 0.4.1 release where `jax.Array` was introduced as the default type for JAX-arrays (instead of `jnp.DeviceArray`, `jnp.SharedDeviceArray`, and `jnp.GlobalDeviceArray`). Even though `isinstance(..., jax.Array)` also works with lower versions such as e.g. `0.3.25`.
More information about the latter at [`jax` v0.4.1 - Release Notes](https://github.com/google/jax/releases/tag/jax-v0.4.1) and [jax.Array migration - JAX documentation](https://jax.readthedocs.io/en/latest/jax_array_migration.html).
## What's missing?
* Do you want me to write an entry in the documentation on how to use 🤗`datasets` with JAX as https://huggingface.co/docs/datasets/use_with_pytorch with PyTorch?
* Do we need to actually include `pyarrow` under the `TYPE_CHECKING` when needed? I just did it for JAX, but if we are OK with that, I can do that with the rest of the formatters, just LMK.
* Should the License header be included in `datasets.formatting.np_formatter`? If so, do I include the one from 2020 e.g. https://github.com/huggingface/datasets/blob/b065547654efa0ec633cf373ac1512884c68b2e1/src/datasets/formatting/tf_formatter.py#L1-L13
* Is there any reason why `jnp.array` is being used instead of `jnp.asarray`? There's no difference between both, just that `jnp.asarray` has `copy=False` as default, even though `numpy` to `jax.numpy` conversion is not zero-copy, but just asking :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5522/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5522.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5522",
"merged_at": "2023-02-15T13:19:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5522.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5522"
} | true | [
"P.S. For more context, I'm currently exploring the integration of ๐ค`datasets` with JAX, so in case you need any help or want me to try something specific just let me know! (`jnp.asarray`/`jnp.array(..., copy=False)` still no zero-copy ๐ญ)",
"_The documentation is not available anymore as the PR was closed or merged._",
"> Hi ! Thanks for improving this :)\r\n\r\nGlad to help, @lhoestq! Also, regarding the questions in the `## What's missing?` can I have your input? Thanks ๐ค ",
"Whoops forgot to reply to these matters - sorry x)\r\n\r\nYea a JAX guide would be welcome in the documentation ! This can be done in a separate PR if you want :)\r\n\r\nPyarrow is always imported with `datasets`, so it doesn't really matter if it's under TYPE_CHECKING or not.\r\n\r\nRegarding the license : yes indeed it should be in every file, thanks for reporting.\r\n\r\nNo big preference between jnp.array and jnp.asarray, unless one offers better performance",
"> Whoops forgot to reply to these matters - sorry x)\r\n> \r\n> Yea a JAX guide would be welcome in the documentation ! This can be done in a separate PR if you want :)\r\n> \r\n> Pyarrow is always imported with `datasets`, so it doesn't really matter if it's under TYPE_CHECKING or not.\r\n> \r\n> Regarding the license : yes indeed it should be in every file, thanks for reporting.\r\n> \r\n> No big preference between jnp.array and jnp.asarray, unless one offers better performance\r\n\r\nCool @lhoestq thanks for the input there!\r\n\r\n1. I can create a separate PR for JAX-format usage\r\n2. Regarding that, makes sense, we can just not put it there, unless it's more clear that in that file `pyarrow` is just required for typing?\r\n3. Do you want me to add the License? In this PR? In a separate one?\r\n4. Ideally `jnp.asarray` is similar to `np.asarray` which in the case of `numpy` tends to be more efficient as it does zero-copy when possible, while `np.array` has `copy=True` by default, anyway as I mentioned before (and as you already know) the copy from `numpy` to `jax` is not zero-copy, while the other way around (`jax` to `numpy`) it is",
"Thanks, feel free to create separate PRs for the docs and the license.\r\n\r\nI guess you can move the `pyarrow` import back to where it was for consistency with the other files and we can merge this one ;)",
"> Thanks, feel free to create separate PRs for the docs and the license.\r\n> \r\n> I guess you can move the `pyarrow` import back to where it was for consistency with the other files and we can merge this one ;)\r\n\r\nCool thanks I'll do that! ๐๐ป ",
"Actually I just checked and there are still tens of thousands of users with jax 0.3.25 - so we need to support older versions as well. I guess it comes from `transformers` which doesn't support jax 0.4 (and doesn't want to until the jax team stops breaking the lib all the time).\r\n\r\nCould you make sure your changes work with older versions as well ? Sorry for not spotting this earlier.\r\nIf we have `\"jax>=0.2.8,!=0.3.2,<=0.4.3\"` that'b be nice, and we can update the latest supported release from time to time.\r\n\r\nIn the CI you can add `jax==0.2.8` for the `deps-minimum` job, and use `jax~=0.4.1` for the `deps-latest`.",
"> Actually I just checked and there are still tens of thousands of users with jax 0.3.25 - so we need to support older versions as well. I guess it comes from `transformers` which doesn't support jax 0.4 (and doesn't want to until the jax team stops breaking the lib all the time).\r\n> \r\n> Could you make sure your changes work with older versions as well ? Sorry for not spotting this earlier. If we have `\"jax>=0.2.8,!=0.3.2,<=0.4.3\"` that'b be nice, and we can update the latest supported release from time to time.\r\n> \r\n> In the CI you can add `jax==0.2.8` for the `deps-minimum` job, and use `jax~=0.4.1` for the `deps-latest`.\r\n\r\nOk, didn't know that @lhoestq thanks for the detailed context! Sure, I'll update it and make sure it's also compatible with older versions.",
"Oops forgot to add you as co-author of the last commit @lhoestq my bad ๐ ",
"So it should be fixed right now @lhoestq! The thing is that `jax` doesn't provide support for Python 3.7 due to its EOL next June (more information at https://endoflife.date/python)...\r\n\r\nAnyway, I can confirm that `jax.Array` type works with 0.3.25 and that the following code works fine:\r\n\r\n```python\r\nimport jax\r\nimport jax.numpy as jnp\r\n\r\nx = jnp.ones((1, 10), dtype=jnp.float32) # Is a `jnp.DeviceArray`\r\nassert isinstance(x, jax.Array) # Is `True`\r\n```\r\n\r\nSo we can still use 0.3.25 as the maximum supported version, as well as 0.3.6 for `jaxlib` so as to be consistent with ๐ค`transformers`.\r\n\r\nThanks for your comments @lhoestq those were really useful!",
"Sorry for the spam, pinning versions leads to failure runs (not related to the type-hinting); I'll check that locally instead of here to avoid spam... Not pinning the dependencies work but I'll check the minimum required versions for both `jax` and `jaxlib` in Python 3.7",
"> Cool ! Thanks for trying to make the CI support it, but it's maybe not worth spending more time on this for now ^^\r\n> \r\n> merging :)\r\n\r\nDo you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)",
"> Do you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)\r\n\r\nIn the end I think we can keep it as is since we didn't modify the core code for jax. Maybe later if we do further changes and need to make sure we don't break anything ;) For example when we decide to add support for more recent versions",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010798 / 0.011353 (-0.000555) | 0.005690 / 0.011008 (-0.005318) | 0.116840 / 0.038508 (0.078332) | 0.041376 / 0.023109 (0.018266) | 0.345616 / 0.275898 (0.069718) | 0.413914 / 0.323480 (0.090434) | 0.009237 / 0.007986 (0.001252) | 0.004490 / 0.004328 (0.000162) | 0.085833 / 0.004250 (0.081582) | 0.050231 / 0.037052 (0.013179) | 0.367276 / 0.258489 (0.108787) | 0.393735 / 0.293841 (0.099894) | 0.043775 / 0.128546 (-0.084772) | 0.013215 / 0.075646 (-0.062432) | 0.391020 / 0.419271 (-0.028252) | 0.055102 / 0.043533 (0.011569) | 0.360333 / 0.255139 (0.105194) | 0.370531 / 0.283200 (0.087331) | 0.115484 / 0.141683 (-0.026199) | 1.694779 / 1.452155 (0.242625) | 1.756249 / 1.492716 (0.263532) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230508 / 0.018006 (0.212501) | 0.478681 / 0.000490 (0.478191) | 0.010305 / 0.000200 (0.010105) | 0.000147 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030953 / 0.037411 (-0.006459) | 0.124320 / 0.014526 (0.109794) | 0.140417 / 0.176557 (-0.036140) | 0.189522 / 0.737135 (-0.547613) | 0.143635 / 0.296338 (-0.152704) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485995 / 0.215209 (0.270786) | 4.799668 / 2.077655 (2.722014) | 2.195655 / 1.504120 (0.691535) | 1.940073 / 1.541195 (0.398879) | 2.053853 / 1.468490 
(0.585363) | 0.825399 / 4.584777 (-3.759378) | 4.522180 / 3.745712 (0.776468) | 2.484626 / 5.269862 (-2.785236) | 1.727617 / 4.565676 (-2.838059) | 0.098808 / 0.424275 (-0.325467) | 0.014753 / 0.007607 (0.007146) | 0.606798 / 0.226044 (0.380754) | 5.918090 / 2.268929 (3.649162) | 2.668124 / 55.444624 (-52.776500) | 2.300447 / 6.876477 (-4.576030) | 2.411203 / 2.142072 (0.269130) | 0.999826 / 4.805227 (-3.805401) | 0.193683 / 6.500664 (-6.306981) | 0.069341 / 0.075469 (-0.006129) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.455816 / 1.841788 (-0.385972) | 17.176476 / 8.074308 (9.102168) | 16.359100 / 10.191392 (6.167708) | 0.199669 / 0.680424 (-0.480755) | 0.033456 / 0.534201 (-0.500745) | 0.512478 / 0.579283 (-0.066805) | 0.526350 / 0.434364 (0.091986) | 0.637669 / 0.540337 (0.097332) | 0.753821 / 1.386936 (-0.633115) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008176 / 0.011353 (-0.003177) | 0.005862 / 0.011008 (-0.005147) | 0.086123 / 0.038508 (0.047615) | 0.037144 / 0.023109 (0.014035) | 0.398328 / 0.275898 (0.122430) | 0.439126 / 0.323480 (0.115647) | 0.006455 / 0.007986 (-0.001531) | 0.004575 / 0.004328 (0.000246) | 0.083396 / 0.004250 (0.079146) | 0.052827 / 0.037052 (0.015775) | 0.401039 / 0.258489 (0.142550) | 0.441374 / 0.293841 (0.147533) | 0.041671 / 0.128546 (-0.086875) | 0.014098 / 0.075646 (-0.061548) | 0.100873 / 0.419271 (-0.318398) | 0.058690 / 0.043533 (0.015157) | 0.395817 / 0.255139 (0.140678) | 0.409226 / 0.283200 (0.126026) | 0.119804 / 0.141683 (-0.021879) | 1.704583 / 1.452155 (0.252428) | 1.782527 / 1.492716 (0.289811) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.255166 / 0.018006 (0.237160) | 0.485091 / 0.000490 (0.484601) | 0.007458 / 0.000200 (0.007258) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034531 / 0.037411 (-0.002880) | 0.134332 / 0.014526 (0.119806) | 0.144944 / 0.176557 (-0.031613) | 0.199352 / 0.737135 (-0.537783) | 0.152243 / 0.296338 (-0.144095) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495361 / 0.215209 (0.280152) | 4.895144 / 2.077655 (2.817489) | 2.350419 / 1.504120 (0.846299) | 2.112131 / 1.541195 (0.570937) | 2.234469 / 1.468490 (0.765978) | 0.815862 / 4.584777 (-3.768915) | 4.531638 / 3.745712 (0.785926) | 2.405186 / 5.269862 (-2.864676) | 1.559020 / 4.565676 (-3.006656) | 0.100432 / 0.424275 (-0.323843) | 0.014217 / 0.007607 (0.006610) | 0.614622 / 0.226044 (0.388577) | 5.984541 / 2.268929 (3.715613) | 2.929897 / 55.444624 (-52.514727) | 2.484010 / 6.876477 (-4.392467) | 2.533538 / 2.142072 (0.391466) | 0.972119 / 4.805227 (-3.833108) | 0.193630 / 6.500664 (-6.307034) | 0.073694 / 0.075469 (-0.001775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.503725 / 1.841788 (-0.338063) | 17.421529 / 8.074308 (9.347221) | 15.686433 / 10.191392 (5.495041) | 0.216688 / 0.680424 (-0.463736) | 0.020929 / 0.534201 (-0.513272) | 0.512523 / 0.579283 (-0.066760) | 0.499878 / 0.434364 (0.065514) | 0.639238 / 0.540337 (0.098900) | 0.769598 / 1.386936 (-0.617338) |\n\n</details>\n</details>\n\n\n",
"> > Do you want me to work on the CI in a separate branch? Thanks for merging and for your help as always :)\r\n> \r\n> In the end I think we can keep it as is since we didn't modify the core code for jax. Maybe later if we do further changes and need to make sure we don't break anything ;) For example when we decide to add support for more recent versions\r\n\r\nMakes sense, thank you @lhoestq!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3407/comments | https://api.github.com/repos/huggingface/datasets/issues/3407/events | https://github.com/huggingface/datasets/pull/3407 | 1,074,502,225 | PR_kwDODunzps4vjyrB | 3,407 | Use max number of data files to infer module | [] | closed | false | null | 1 | 2021-12-08T14:58:43Z | 2021-12-14T17:08:42Z | 2021-12-14T17:08:42Z | null | When inferring the module for datasets without script, set a maximum number of iterations over data files.
This PR fixes the issue of taking too long when hundred of data files present.
Please, feel free to agree on both numbers:
```
# Datasets without script
DATA_FILES_MAX_NUMBER = 10
ARCHIVED_DATA_FILES_MAX_NUMBER = 5
```
Fix #3404. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3407/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3407.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3407",
"merged_at": "2021-12-14T17:08:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3407.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3407"
} | true | [
"Cool thanks :) Feel free to merge if it's all good for you"
] |
https://api.github.com/repos/huggingface/datasets/issues/4819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4819/comments | https://api.github.com/repos/huggingface/datasets/issues/4819/events | https://github.com/huggingface/datasets/pull/4819 | 1,335,064,449 | PR_kwDODunzps48-xc6 | 4,819 | Add missing language tags to resources | [] | closed | false | null | 1 | 2022-08-10T19:06:42Z | 2022-08-10T19:45:49Z | 2022-08-10T19:32:15Z | null | Add missing language tags to resources, required by existing datasets on GitHub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4819/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4819",
"merged_at": "2022-08-10T19:32:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4819"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3049/comments | https://api.github.com/repos/huggingface/datasets/issues/3049/events | https://github.com/huggingface/datasets/issues/3049 | 1,021,770,008 | I_kwDODunzps485vkY | 3,049 | TimeoutError during streaming | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-09T18:06:51Z | 2021-10-11T09:35:38Z | 2021-10-11T09:35:38Z | null | ## Describe the bug
I got a TimeoutError after streaming for about 10h.
## Steps to reproduce the bug
Very long code but we could do a test of streaming indefinitely data, though error may take a while to appear.
## Expected results
This error was not expected in the code which considers only `ClientError` but not `TimeoutError`.
See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129).
Based on the traceback, it looks like the `TimeoutError` was not captured.
## Actual results
```
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner
result[0] = await coro
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range
out = await r.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read
self._body = await self.content.read()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read
block = await self.readany()
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany
await self._wait("readany")
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait
await waiter
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__
raise asyncio.TimeoutError from None
asyncio.exceptions.TimeoutError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module>
main()
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main
for batch in tqdm(
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__
for obj in iterable:
File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming
for item in dataset:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp>
key_examples_list = [(key, example)] + [
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__
for key, example in iterator:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__
for x in self.ex_iterable:
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__
for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):
File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables
batch = f.read(self.config.chunksize)
File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries
out = read(*args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read
return super().read(length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read
out = self.cache._fetch(self.loc, self.loc + length)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch
self.cache = self.fetcher(start, bend)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync
raise FSTimeoutError from return_result
fsspec.exceptions.FSTimeoutError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3049/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1186/comments | https://api.github.com/repos/huggingface/datasets/issues/1186/events | https://github.com/huggingface/datasets/pull/1186 | 757,826,660 | MDExOlB1bGxSZXF1ZXN0NTMzMTI1NjE4 | 1,186 | all test passed | [] | closed | false | null | 1 | 2020-12-06T02:12:32Z | 2020-12-07T15:06:55Z | 2020-12-07T15:06:55Z | null | need help creating dummy data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1186/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1186",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1186"
} | true | [
"looks like this PR includes changes to 5000 files\r\ncould you create a new branch and a new PR ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1531/comments | https://api.github.com/repos/huggingface/datasets/issues/1531/events | https://github.com/huggingface/datasets/pull/1531 | 764,752,882 | MDExOlB1bGxSZXF1ZXN0NTM4NjcwNzcz | 1,531 | adding hate-speech-and-offensive-language | [] | closed | false | null | 0 | 2020-12-13T01:59:07Z | 2020-12-13T02:17:02Z | 2020-12-13T02:17:02Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1531/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1531.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1531",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1531.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1531"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1018/comments | https://api.github.com/repos/huggingface/datasets/issues/1018/events | https://github.com/huggingface/datasets/pull/1018 | 755,570,882 | MDExOlB1bGxSZXF1ZXN0NTMxMjU3NTU2 | 1,018 | Add Sepedi NER | [] | closed | false | null | 1 | 2020-12-02T20:01:05Z | 2020-12-03T21:47:03Z | 2020-12-03T21:46:38Z | null | This is a new branch created for this dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1018/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1018",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1018"
} | true | [
"Sorry for this. I deleted sepedi_ner_corpus as per your earlier advise. Let me check. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2588/comments | https://api.github.com/repos/huggingface/datasets/issues/2588/events | https://github.com/huggingface/datasets/pull/2588 | 936,795,541 | MDExOlB1bGxSZXF1ZXN0NjgzNDQ5Njky | 2,588 | Fix test_is_small_dataset | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-05T07:46:26Z | 2021-07-12T14:10:11Z | 2021-07-06T17:09:30Z | null | Remove environment variable fixture `env_max_in_memory_dataset_size`. This fixture does not work because env variable is read in datasets.config when first loading datasets, and it is never reread during tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2588",
"merged_at": "2021-07-06T17:09:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2588"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3593 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3593/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3593/comments | https://api.github.com/repos/huggingface/datasets/issues/3593/events | https://github.com/huggingface/datasets/pull/3593 | 1,107,070,852 | PR_kwDODunzps4xNhTu | 3,593 | Update README.md | [] | closed | false | null | 0 | 2022-01-18T15:52:16Z | 2022-01-20T17:14:53Z | 2022-01-20T17:14:53Z | null | Towards license of Tweet Eval parts | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3593/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3593/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3593.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3593",
"merged_at": "2022-01-20T17:14:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3593.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3593"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1535 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1535/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1535/comments | https://api.github.com/repos/huggingface/datasets/issues/1535/events | https://github.com/huggingface/datasets/pull/1535 | 764,977,542 | MDExOlB1bGxSZXF1ZXN0NTM4ODAwMDUw | 1,535 | Adding Igbo monolingual dataset | [] | closed | false | null | 1 | 2020-12-13T05:16:37Z | 2020-12-21T14:39:49Z | 2020-12-21T14:39:49Z | null | This PR adds the Igbo Monolingual dataset.
Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_monoling
Paper: https://arxiv.org/abs/2004.00648 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1535/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1535/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1535.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1535",
"merged_at": "2020-12-21T14:39:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1535.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1535"
} | true | [
"@lhoestq Thank you for the review. I have made all the changes you mentioned. PTAL! "
] |
https://api.github.com/repos/huggingface/datasets/issues/3502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3502/comments | https://api.github.com/repos/huggingface/datasets/issues/3502/events | https://github.com/huggingface/datasets/pull/3502 | 1,090,438,558 | PR_kwDODunzps4wXSLi | 3,502 | Add QuALITY | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2021-12-29T10:58:46Z | 2022-10-03T09:36:14Z | 2022-10-03T09:36:14Z | null | Fixes #3441. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3502/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3502/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3502.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3502",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3502.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3502"
} | true | [
"Thanks for your contribution, @jaketae. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/951/comments | https://api.github.com/repos/huggingface/datasets/issues/951/events | https://github.com/huggingface/datasets/pull/951 | 754,349,979 | MDExOlB1bGxSZXF1ZXN0NTMwMjY1MTY0 | 951 | Prachathai67k | [] | closed | false | null | 1 | 2020-12-01T12:21:52Z | 2020-12-01T12:29:53Z | 2020-12-01T12:28:26Z | null | Add `prachathai-67k`: News Article Corpus and Multi-label Text Classificdation from Prachathai.com
The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles wtih 12 curated tags from August 24, 2004 to November 15, 2018. The dataset was originally scraped by [@lukkiddd](https://github.com/lukkiddd) and cleaned by [@cstorm125](https://github.com/cstorm125). Download the dataset [here](https://www.dropbox.com/s/fsxepdka4l2pr45/prachathai-67k.zip?dl=1). You can also see preliminary exploration in [exploration.ipynb](https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb).
This dataset is a part of [pyThaiNLP](https://github.com/PyThaiNLP/) Thai text [classification-benchmarks](https://github.com/PyThaiNLP/classification-benchmarks). For the benchmark, we selected the following tags with substantial volume that resemble **classifying types of articles**:
* `การเมือง` - politics
* `สิทธิมนุษยชน` - human_rights
* `คุณภาพชีวิต` - quality_of_life
* `ต่างประเทศ` - international
* `สังคม` - social
* `สิ่งแวดล้อม` - environment
* `เศรษฐกิจ` - economics
* `วัฒนธรรม` - culture
* `แรงงาน` - labor
* `ความมั่นคง` - national_security
* `ไอซีที` - ict
* `การศึกษา` - education
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/951/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/951",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/951"
} | true | [
"Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k"
] |
https://api.github.com/repos/huggingface/datasets/issues/5988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5988/comments | https://api.github.com/repos/huggingface/datasets/issues/5988/events | https://github.com/huggingface/datasets/issues/5988 | 1,773,257,828 | I_kwDODunzps5pscRk | 5,988 | ConnectionError: Couldn't reach dataset_infos.json | [] | closed | false | null | 1 | 2023-06-25T12:39:31Z | 2023-07-07T13:20:57Z | 2023-07-07T13:20:57Z | null | ### Describe the bug
I'm trying to load codeparrot/codeparrot-clean-train, but get the following error:
ConnectionError: Couldn't reach https://huggingface.co/datasets/codeparrot/codeparrot-clean-train/resolve/main/dataset_infos.json (ConnectionError(ProtocolError('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))))
### Steps to reproduce the bug
train_data = load_dataset('codeparrot/codeparrot-clean-train', split='train')
### Expected behavior
download the dataset
### Environment info
centos7 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5988/timeline | null | completed | null | null | false | [
"Unfortunately, I can't reproduce the error. What does the following code return for you?\r\n```python\r\nimport requests\r\nfrom huggingface_hub import hf_hub_url\r\nr = requests.get(hf_hub_url(\"codeparrot/codeparrot-clean-train\", \"dataset_infos.json\", repo_type=\"dataset\"))\r\n```\r\n\r\nAlso, can you provide more info about your network (region, proxies, etc.)?"
] |
https://api.github.com/repos/huggingface/datasets/issues/408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/408/comments | https://api.github.com/repos/huggingface/datasets/issues/408/events | https://github.com/huggingface/datasets/pull/408 | 659,064,144 | MDExOlB1bGxSZXF1ZXN0NDUwOTU1MTE0 | 408 | Add tests datasets gcp | [] | closed | false | null | 0 | 2020-07-17T09:23:27Z | 2020-07-17T09:26:57Z | 2020-07-17T09:26:56Z | null | Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data.
These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo.
This should avoid future issues like #407 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/408/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/408/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/408.diff",
"html_url": "https://github.com/huggingface/datasets/pull/408",
"merged_at": "2020-07-17T09:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/408.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/408"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3331/comments | https://api.github.com/repos/huggingface/datasets/issues/3331/events | https://github.com/huggingface/datasets/issues/3331 | 1,065,275,896 | I_kwDODunzps4_ftH4 | 3,331 | AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-11-28T08:54:05Z | 2021-11-29T13:49:44Z | 2021-11-29T13:34:14Z | null | ## Describe the bug
I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets)
But when I load the dataset, an error raised:
```bash
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
```
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("luozhouyang/question-answering-datasets", data_files=["dureader_robust.train.json"])
```
## Expected results
Load dataset successfully without any error.
## Actual results
```bash
Traceback (most recent call last):
File "/mnt/home/zhouyang.lzy/github/naivenlp/naivenlp/tests/question_answering_tests/dataset_test.py", line 89, in test_load_dataset_with_hf
data_files=["dureader_robust.train.json"],
File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1616, in load_dataset
**config_kwargs,
File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1443, in load_dataset_builder
path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files
File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1157, in dataset_module_factory
raise e1 from None
File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1144, in dataset_module_factory
download_mode=download_mode,
File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 798, in get_module
raise FileNotFoundError(f"No data files or dataset script found in {self.path}")
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: linux
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3331/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nthe fix was merged and will be available in the next release of `datasets`.\r\nIn the meantime, you can use it by installing `datasets` directly from master as follows:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1254/comments | https://api.github.com/repos/huggingface/datasets/issues/1254/events | https://github.com/huggingface/datasets/pull/1254 | 758,518,774 | MDExOlB1bGxSZXF1ZXN0NTMzNjc5MTYy | 1,254 | Added WikiText-TL-39 | [] | closed | false | null | 1 | 2020-12-07T13:43:48Z | 2020-12-08T16:00:58Z | 2020-12-08T16:00:58Z | null | This PR adds the WikiText-TL-39 Filipino Language Modeling dataset.
Paper: https://arxiv.org/abs/1907.00409
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1254/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1254",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1254"
} | true | [
"looks like this PR also includes changes about another dataset `covid_qa_deepset`\r\n\r\nCould you create another branch and another PR that only includes the changes for the wikitext-tl-39 dataset ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1961/comments | https://api.github.com/repos/huggingface/datasets/issues/1961/events | https://github.com/huggingface/datasets/pull/1961 | 818,077,947 | MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0 | 1,961 | Add sst dataset | [] | closed | false | null | 0 | 2021-02-28T02:08:29Z | 2021-03-04T10:38:53Z | 2021-03-04T10:38:53Z | null | Related to #1934—Add the Stanford Sentiment Treebank dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1961/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1961.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1961",
"merged_at": "2021-03-04T10:38:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1961.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1961"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4651/comments | https://api.github.com/repos/huggingface/datasets/issues/4651/events | https://github.com/huggingface/datasets/issues/4651 | 1,296,689,414 | I_kwDODunzps5NSekG | 4,651 | Add Flickr 30k Dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-07-07T01:59:08Z | 2022-07-14T02:09:45Z | 2022-07-14T02:09:45Z | null | ## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.*
- **Paper:** *https://transacl.org/ojs/index.php/tacl/article/view/229/33*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4651/timeline | null | completed | null | null | false | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)."
] |
https://api.github.com/repos/huggingface/datasets/issues/5118 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5118/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5118/comments | https://api.github.com/repos/huggingface/datasets/issues/5118/events | https://github.com/huggingface/datasets/issues/5118 | 1,410,547,373 | I_kwDODunzps5UEz6t | 5,118 | Installing `datasets` on M1 computers | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-10-16T16:50:08Z | 2022-10-19T09:10:08Z | 2022-10-19T09:10:08Z | null | ## Describe the bug
I wanted to install `datasets` dependencies on my M1 (in order to start contributing to the project). However, I got an error regarding `tensorflow`.
On M1, `tensorflow-macos` needs to be installed instead. Can we add a conditional requirement, so that `tensorflow-macos` would be installed on M1?
## Steps to reproduce the bug
Fresh clone this project (on m1), create a virtualenv and run this:
```python
pip install -e ".[dev]"
```
## Expected results
Installation should be smooth, and all the dependencies should be installed on M1.
## Actual results
You should receive an error, saying pip couldn't find a version that matches this pattern:
```
tensorflow>=2.3,!=2.6.0,!=2.6.1
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.6.2.dev0
- Platform: macOS-12.6-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5118/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5118/timeline | null | completed | null | null | false | [
"Thanks for reporting, @david1542."
] |
https://api.github.com/repos/huggingface/datasets/issues/1036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1036/comments | https://api.github.com/repos/huggingface/datasets/issues/1036/events | https://github.com/huggingface/datasets/pull/1036 | 755,953,294 | MDExOlB1bGxSZXF1ZXN0NTMxNTc4MjQ4 | 1,036 | Add PerSenT | [] | closed | false | null | 2 | 2020-12-03T07:43:58Z | 2020-12-14T13:40:43Z | 2020-12-14T13:40:43Z | null | Added [Person's SentimenT](https://stonybrooknlp.github.io/PerSenT/) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1036/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1036",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1036"
} | true | [
"looks like this PR contains changes in many other files than the ones for PerSenT\r\ncan you create another branch and another PR ?",
"closing since #1142 was merged"
] |
https://api.github.com/repos/huggingface/datasets/issues/5357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5357/comments | https://api.github.com/repos/huggingface/datasets/issues/5357/events | https://github.com/huggingface/datasets/pull/5357 | 1,495,029,602 | PR_kwDODunzps5FXNyR | 5,357 | Support torch dataloader without torch formatting | [] | closed | false | null | 7 | 2022-12-13T19:39:24Z | 2023-01-04T12:45:40Z | 2022-12-15T19:15:54Z | null | In https://github.com/huggingface/datasets/pull/5084 we make the torch formatting consistent with the map-style datasets formatting: a torch formatted iterable dataset will yield torch tensors.
The previous behavior of the torch formatting for iterable datasets was simply to make the iterable dataset inherit from `torch.utils.data.Dataset` so that it works in a torch DataLoader. However, ideally an unformatted dataset should also work with a DataLoader. To fix that, `datasets.IterableDataset` should inherit from `torch.utils.data.IterableDataset`.
Since we don't want to import torch on startup, I created this PR to dynamically make the `datasets.IterableDataset` class inherit from the torch one when a `datasets.IterableDataset` is instantiated and if PyTorch is available.
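A rough sketch of the dynamic-inheritance idea — the class and helper names below are illustrative stand-ins, not the actual `datasets` internals:
```python
import importlib.util

def _maybe_add_torch_parent(cls):
    # Keep the import torch-free at startup: only touch torch if it is installed.
    if importlib.util.find_spec("torch") is not None:
        import torch.utils.data

        if torch.utils.data.IterableDataset not in cls.__bases__:
            cls.__bases__ += (torch.utils.data.IterableDataset,)

class _InfoMixin:
    """Stand-in for an existing non-object base class."""

class MyIterableDataset(_InfoMixin):
    def __init__(self, generate_examples):
        self._generate_examples = generate_examples
        _maybe_add_torch_parent(type(self))  # done lazily, at instantiation time

    def __iter__(self):
        yield from self._generate_examples()
```
After instantiation, `isinstance(ds, torch.utils.data.IterableDataset)` holds, which is exactly what the torch DataLoader checks for. With that in place, the user-facing behavior looks like this: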
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("c4", "en", streaming=True, split="train")
>>> import torch.utils.data
>>> isinstance(ds, torch.utils.data.IterableDataset)
True
>>> dataloader = torch.utils.data.DataLoader(ds, batch_size=32, num_workers=4)
>>> for example in dataloader:
...: ...
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5357/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5357",
"merged_at": "2022-12-15T19:15:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5357"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Need some more time to fix the tests, especially with pickle",
"> And I actually don't quite understand the idea - what's the motivation behind making only IterableDataset compatible with torch DataLoader without setting the format explicitly?\r\n\r\nSetting the format to pytorch = set the output types of the dataset to be pytorch tensors. However sometimes your dataset is not made of tensors but you still want to be able to use a pytorch DataLoader",
"A bit more context. \r\n\r\nThe arrow-backed `Dataset` supports `DataLoader(ds)` (even if the format is not \"torch\"), and we want to be able to do the same with `IterableDataset` for consistency. However, this is when the PyTorch internals come into play - an iterable dataset needs to be an instance of `torch.utils.data.IterableDataset` due to [this](https://github.com/pytorch/pytorch/blob/abc54f93145830b502400faa92bec86e05422fbd/torch/utils/data/dataloader.py#L276) check (notice there is no check for the map-style version). Hence the explicit subclassing in this PR.",
"Exactly :) Btw I just took your comments into account @polinaeterna , so feel free to review again",
"@lhoestq just checking, does this change still preserve the fix to the \"data duplicate when setting num_works > 1 with streaming data\" issue from before?\r\n\r\nhttps://github.com/huggingface/datasets/issues/3423",
"Yes :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5468/comments | https://api.github.com/repos/huggingface/datasets/issues/5468/events | https://github.com/huggingface/datasets/issues/5468 | 1,558,066,625 | I_kwDODunzps5c3jXB | 5,468 | Allow opposite of remove_columns on Dataset and DatasetDict | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 9 | 2023-01-26T12:28:09Z | 2023-02-13T09:59:38Z | 2023-02-13T09:59:38Z | null | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(columns_to_remove)
```
This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error prone) if you could just write:
```python
gigaspeech = gigaspeech.keep_columns(["text", "audio"])
```
Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is.
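A minimal sketch of how such a helper could be layered on top of `remove_columns` — the name `keep_columns` is just the one proposed here (per the discussion in the comments, `select_columns` was the favored name):
```python
from datasets import Dataset

def keep_columns(dataset: Dataset, column_names: list) -> Dataset:
    """Hypothetical helper: keep only `column_names` and drop the rest."""
    to_remove = [col for col in dataset.column_names if col not in set(column_names)]
    return dataset.remove_columns(to_remove)

# e.g. gigaspeech["train"] = keep_columns(gigaspeech["train"], ["text", "audio"])
```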
### Motivation
Less code to write for the user of the dataset.
### Your contribution
- | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5468/timeline | null | completed | null | null | false | [
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also want to work on this issue I am a newbie to open source. ",
"This sounds related to https://github.com/huggingface/datasets/issues/5474\r\n\r\nI'm fine with `select_columns`, or we could also override `select` to also accept a list of columns maybe ?",
"@lhoestq, I am planning to add a member function to the dataset class to perform the selection operation. Do you think its the right way to proceed? or there is a better option ?",
"Unless @mariosasko thinks otherwise, I think it can go in `Dataset.select()` :)\r\nThough some parameters like keep_in_memory, indices_cache_file_name or writer_batch_size wouldn't when selecting columns, so we would need to update the docstring as well",
"If someone wants to give it a shot, feel free to comment `#self-assign` and it will assign the issue to you.\r\n\r\nFeel free to ping us here if you have questions or if we can help :)",
"I would rather have this functionality as a separate method. IMO it's always better to be explicit than to have an API where a single method can do different/uncorrelated things (somewhat reminds me of Pandas, and there is probably a good reason why PyArrow is more rigid in this aspect).",
"In the end I also think it would be nice to have it as a separate method, this way we can also have it for `IterableDataset` (which can't have `select` for indices)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2523/comments | https://api.github.com/repos/huggingface/datasets/issues/2523/events | https://github.com/huggingface/datasets/issues/2523 | 925,421,008 | MDU6SXNzdWU5MjU0MjEwMDg= | 2,523 | Fr | [] | closed | false | null | 0 | 2021-06-19T15:56:32Z | 2021-06-19T18:48:23Z | 2021-06-19T18:48:23Z | null | __Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2523/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2523/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5712/comments | https://api.github.com/repos/huggingface/datasets/issues/5712/events | https://github.com/huggingface/datasets/issues/5712 | 1,655,972,106 | I_kwDODunzps5itCEK | 5,712 | load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load() | [] | closed | false | null | 2 | 2023-04-05T16:47:10Z | 2023-04-06T08:32:37Z | 2023-04-05T17:17:44Z | null | ### Describe the bug
Hi,
I have some `load_dataset()` code for a custom offline dataset that works with datasets v2.10.1.
```python
ds = datasets.load_dataset(path=dataset_dir,
name=configuration,
data_dir=dataset_dir,
cache_dir=cache_dir,
aux_dir=aux_dir,
# download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD,
num_proc=18)
```
When upgrading datasets to 2.11.0, it fails with the following error:
```
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare
super()._download_and_prepare(
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators
self.some_function()
File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function()
x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()})
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__
bytes = self.zip.open(key)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open
fheader = zef_file.read(sizeFileHeader)
File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read
self._file.seek(self._pos)
ValueError: seek of closed file
```
### Steps to reproduce the bug
Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()`
```python
with np.load(filename) as fp:
x_df = pd.DataFrame({'feature': fp['x'].tolist()})
```
I'll try to generate a short snippet that reproduces the error.
### Expected behavior
I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error.
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.8
- Huggingface_hub version: 0.12.0
- PyArrow version: 11.0.0
- Pandas version: 1.5.2
- numpy: 1.24.2
- This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5712/timeline | null | completed | null | null | false | [
"Closing since this is a duplicate of #5711",
"> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate"
] |
https://api.github.com/repos/huggingface/datasets/issues/3409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3409/comments | https://api.github.com/repos/huggingface/datasets/issues/3409/events | https://github.com/huggingface/datasets/pull/3409 | 1,075,684,593 | PR_kwDODunzps4vnpU0 | 3,409 | Pass new_fingerprint in multiprocessing | [] | closed | false | null | 2 | 2021-12-09T15:12:00Z | 2022-08-19T10:41:04Z | 2021-12-09T17:38:43Z | null | Following https://github.com/huggingface/datasets/pull/3045
Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However it's ignored if `num_proc>1`.
In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when `num_proc>1`.
More specifically, `new_fingerprint` with a suffix based on the process `rank` is passed, so that each process has a different `new_fingerprint`.
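An illustrative sketch of the suffixing idea — the exact suffix format here is an assumption, not necessarily the one used in the PR:
```python
def fingerprint_for_rank(new_fingerprint: str, rank: int, num_proc: int) -> str:
    # Each worker process writes its own cache file, so fingerprints must differ.
    if num_proc is None or num_proc <= 1:
        return new_fingerprint
    return f"{new_fingerprint}_{rank:05d}_of_{num_proc:05d}"
```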
cc @TevenLeScao @vlievin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3409/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3409",
"merged_at": "2021-12-09T17:38:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3409"
} | true | [
"@lhoestq Hi~, does this support that `datasets.map(func, batched=True, batch_size, num_proc>1, new_fingerprint=\"func_v1\")` even if `func` can't pickle. I also notice that you said \"Unfortunately you need picklable mapping functions to make multiprocessing work :confused: Also feel free to open an issue or send me a dm if you are in a situation where the caching fails. I can help you with that :slight_smile:\" in [here](https://discuss.huggingface.co/t/how-to-deal-with-unpickable-objects-in-map/1547/8). So, I want to ask that is there a way for users to use multiprocessing in `datasets.map` when the `func` can't pickle? \r\nThanks in advance!",
"Yea you need your function to be picklable for multiprocessing, otherwise the main process is not able to pass your function to the child processes."
] |
https://api.github.com/repos/huggingface/datasets/issues/4682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4682/comments | https://api.github.com/repos/huggingface/datasets/issues/4682/events | https://github.com/huggingface/datasets/issues/4682 | 1,304,788,215 | I_kwDODunzps5NxXz3 | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | [] | open | false | null | 0 | 2022-07-14T13:26:47Z | 2022-07-14T13:26:47Z | null | null | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL-formatted. I was trying to iterate through via streaming mode and grab all "score_title_description" values, but I kept getting a key-not-found error after a certain point of iteration. I found that some JSON objects in the file don't have "score_title_description". In some cases this returns `None`, and in others it raises a key error. Why is there an inconsistency here, and how can I fix it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4682/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1364/comments | https://api.github.com/repos/huggingface/datasets/issues/1364/events | https://github.com/huggingface/datasets/pull/1364 | 760,164,558 | MDExOlB1bGxSZXF1ZXN0NTM1MDQxNjUz | 1,364 | Narrative QA (Manual Download Stories) Dataset | [] | closed | false | null | 3 | 2020-12-09T09:33:59Z | 2021-01-25T15:31:51Z | 2021-01-25T15:31:31Z | null | Narrative QA with manual download for stories. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1364/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1364",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1364"
} | true | [
"Hi ! Maybe we can rename it `narrativeqa_manual` to make it explicit that this one requires manual download contrary to `narrativeqa` ?\r\nIt's important to have this one as well, in case the `narrativeqa` one suffers from download issues (checksums or dead links for example).\r\n\r\nYou can also copy the dataset card from `narrativeqa` and add the dummy data as well",
"Thanks @lhoestq will do all this and submit a request in the coming days. ๐ ",
"Closing this as another pull request is already done. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1279/comments | https://api.github.com/repos/huggingface/datasets/issues/1279/events | https://github.com/huggingface/datasets/pull/1279 | 759,108,726 | MDExOlB1bGxSZXF1ZXN0NTM0MTU4OTY5 | 1,279 | added para_pat | [] | closed | false | null | 2 | 2020-12-08T06:28:47Z | 2020-12-14T13:41:17Z | 2020-12-14T13:41:17Z | null | Dataset link : https://figshare.com/articles/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632
Working on README.md currently | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1279/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1279",
"merged_at": "2020-12-14T13:41:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1279"
} | true | [
"Updated with Translation feature type. Working on dataset tags and README",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/2064 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2064/comments | https://api.github.com/repos/huggingface/datasets/issues/2064/events | https://github.com/huggingface/datasets/pull/2064 | 833,002,360 | MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1 | 2,064 | Fix ted_talks_iwslt version error | [] | closed | false | null | 0 | 2021-03-16T16:43:45Z | 2021-03-16T18:00:08Z | 2021-03-16T18:00:08Z | null | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2064/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"merged_at": "2021-03-16T18:00:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2570/comments | https://api.github.com/repos/huggingface/datasets/issues/2570/events | https://github.com/huggingface/datasets/pull/2570 | 933,402,521 | MDExOlB1bGxSZXF1ZXN0NjgwNjEzNzc0 | 2,570 | Minor fix docs format for bertscore | [] | closed | false | null | 0 | 2021-06-30T07:42:12Z | 2021-06-30T15:31:01Z | 2021-06-30T15:31:01Z | null | Minor fix docs format for bertscore:
- link to README
- format of KWARGS_DESCRIPTION | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2570/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2570",
"merged_at": "2021-06-30T15:31:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2570"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4670/comments | https://api.github.com/repos/huggingface/datasets/issues/4670/events | https://github.com/huggingface/datasets/issues/4670 | 1,299,984,246 | I_kwDODunzps5NfC92 | 4,670 | Can't extract files from `.7z` zipfile using `download_and_extract` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-07-10T18:16:49Z | 2022-07-15T13:02:07Z | 2022-07-15T13:02:07Z | null | ## Describe the bug
I'm adding a new dataset which is a `.7z` archive on Google Drive containing 3 JSON files. I'm able to download the data files using `download_and_extract`, but after downloading it throws this error:
```
>>> dataset = load_dataset("./datasets/mantis/")
Using custom data configuration default
Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4...
Downloading data: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 77.2M/77.2M [00:23<00:00, 3.28MB/s]
/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset
use_auth_token=use_auth_token,
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare
) from None
OSError: Cannot find data file.
Original error:
[Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json'
```
just before generating the splits. I checked the `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's a `7z` archive (same as the downloaded Google Drive file), which means it didn't get extracted. Do I need to extract it separately and then pass the paths for the train/dev/test files in `SplitGenerator`?
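As a stopgap until the library supports 7z natively, a manual-extraction sketch for the loading script — this assumes the third-party `py7zr` package, which is not a `datasets` dependency:
```python
import os
import py7zr

def extract_7z(archive_path: str, output_dir: str) -> str:
    """Extract a downloaded .7z archive and return the output directory."""
    os.makedirs(output_dir, exist_ok=True)
    with py7zr.SevenZipFile(archive_path, mode="r") as archive:
        archive.extractall(path=output_dir)
    return output_dir
```
The returned directory can then be passed to the `SplitGenerator`s via `gen_kwargs`.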
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.8
- PyArrow version: 5.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4670/timeline | null | completed | null | null | false | [
"Hi @bhavitvyamalik, thanks for reporting.\r\n\r\nYes, currently we do not support 7zip archive compression: I think we should.\r\n\r\nAs a workaround, you could uncompress it explicitly, like done in e.g. `samsum` dataset: \r\n\r\nhttps://github.com/huggingface/datasets/blob/fedf891a08bfc77041d575fad6c26091bc0fce52/datasets/samsum/samsum.py#L106-L110\r\n",
"Related to this issue: https://github.com/huggingface/datasets/issues/3541",
"Sure, let me look into and check what can be done. Will keep you guys updated here!",
"Initially, I thought of solving this without any external dependency. Almost everywhere I saw `lzma` can be used for this but there is a caveat that lzma doesnโt work with 7z archives but only single files. In my case the 7z archive has multiple files so it didn't work. Is it fine to use external library here?",
"Hi @bhavitvyamalik, thanks for your investigation.\r\n\r\nOn Monday, I started a PR that will eventually close this issue as well: I'm linking it to this.\r\n- #4672\r\n\r\nLet me know what you think. "
] |
https://api.github.com/repos/huggingface/datasets/issues/5741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5741/comments | https://api.github.com/repos/huggingface/datasets/issues/5741/events | https://github.com/huggingface/datasets/pull/5741 | 1,665,860,919 | PR_kwDODunzps5OM9nZ | 5,741 | Fix CI warnings | [] | closed | false | null | 2 | 2023-04-13T07:17:02Z | 2023-04-13T09:48:10Z | 2023-04-13T09:40:50Z | null | Fix warnings in our CI tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5741/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5741.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5741",
"merged_at": "2023-04-13T09:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5741.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5741"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007448 / 0.011353 (-0.003905) | 0.005182 / 0.011008 (-0.005826) | 0.098718 / 0.038508 (0.060210) | 0.034594 / 0.023109 (0.011485) | 0.317301 / 0.275898 (0.041403) | 0.357800 / 0.323480 (0.034320) | 0.005860 / 0.007986 (-0.002126) | 0.004267 / 0.004328 (-0.000061) | 0.074876 / 0.004250 (0.070626) | 0.048002 / 0.037052 (0.010950) | 0.333360 / 0.258489 (0.074871) | 0.362080 / 0.293841 (0.068239) | 0.035957 / 0.128546 (-0.092589) | 0.012245 / 0.075646 (-0.063401) | 0.332970 / 0.419271 (-0.086301) | 0.050825 / 0.043533 (0.007293) | 0.313936 / 0.255139 (0.058797) | 0.340684 / 0.283200 (0.057485) | 0.106630 / 0.141683 (-0.035053) | 1.427898 / 1.452155 (-0.024257) | 1.547518 / 1.492716 (0.054801) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.296952 / 0.018006 (0.278945) | 0.515708 / 0.000490 (0.515218) | 0.004225 / 0.000200 (0.004025) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029365 / 0.037411 (-0.008046) | 0.111142 / 0.014526 (0.096616) | 0.124414 / 0.176557 (-0.052142) | 0.185227 / 0.737135 (-0.551908) | 0.129545 / 0.296338 (-0.166793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403303 / 0.215209 (0.188094) | 4.044138 / 2.077655 (1.966483) | 1.803622 / 1.504120 (0.299502) | 1.615436 / 1.541195 (0.074242) | 1.703576 / 1.468490 
(0.235086) | 0.706398 / 4.584777 (-3.878379) | 3.912995 / 3.745712 (0.167283) | 4.004575 / 5.269862 (-1.265287) | 2.101592 / 4.565676 (-2.464085) | 0.087280 / 0.424275 (-0.336995) | 0.012564 / 0.007607 (0.004957) | 0.508484 / 0.226044 (0.282440) | 5.089351 / 2.268929 (2.820422) | 2.269022 / 55.444624 (-53.175602) | 1.933375 / 6.876477 (-4.943102) | 2.136783 / 2.142072 (-0.005289) | 0.862624 / 4.805227 (-3.942603) | 0.172107 / 6.500664 (-6.328557) | 0.066694 / 0.075469 (-0.008775) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.172513 / 1.841788 (-0.669275) | 15.877519 / 8.074308 (7.803211) | 14.687476 / 10.191392 (4.496084) | 0.189392 / 0.680424 (-0.491032) | 0.017334 / 0.534201 (-0.516866) | 0.420201 / 0.579283 (-0.159082) | 0.418502 / 0.434364 (-0.015862) | 0.489130 / 0.540337 (-0.051207) | 0.580678 / 1.386936 (-0.806258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007942 / 0.011353 (-0.003411) | 0.005312 / 0.011008 (-0.005696) | 0.074684 / 0.038508 (0.036176) | 0.035952 / 0.023109 (0.012843) | 0.349672 / 0.275898 (0.073774) | 0.377157 / 0.323480 (0.053678) | 0.006399 / 0.007986 (-0.001586) | 0.005769 / 0.004328 (0.001441) | 0.074283 / 0.004250 (0.070032) | 0.053217 / 0.037052 (0.016165) | 0.342545 / 0.258489 (0.084056) | 0.383663 / 0.293841 (0.089822) | 0.037234 / 0.128546 (-0.091312) | 0.012349 / 0.075646 (-0.063298) | 0.086522 / 0.419271 (-0.332749) | 0.049888 / 0.043533 (0.006355) | 0.337686 / 0.255139 (0.082547) | 0.361564 / 0.283200 (0.078365) | 0.104902 / 0.141683 (-0.036781) | 1.478259 / 1.452155 (0.026104) | 1.576376 / 1.492716 (0.083660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.339760 / 0.018006 (0.321753) | 0.530946 / 0.000490 (0.530456) | 0.000474 / 0.000200 (0.000274) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029685 / 0.037411 (-0.007726) | 0.109409 / 0.014526 (0.094883) | 0.125579 / 0.176557 (-0.050978) | 0.175378 / 0.737135 (-0.561757) | 0.130672 / 0.296338 (-0.165667) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428456 / 0.215209 (0.213247) | 4.238731 / 2.077655 (2.161077) | 2.046703 / 1.504120 (0.542583) | 1.850701 / 1.541195 (0.309506) | 1.909290 / 1.468490 (0.440800) | 0.714314 / 4.584777 (-3.870463) | 3.816056 / 3.745712 (0.070344) | 2.118567 / 5.269862 (-3.151295) | 1.348017 / 4.565676 (-3.217659) | 0.087140 / 0.424275 (-0.337135) | 0.012546 / 0.007607 (0.004938) | 0.538041 / 0.226044 (0.311997) | 5.381822 / 2.268929 (3.112893) | 2.525685 / 55.444624 (-52.918939) | 2.178659 / 6.876477 (-4.697817) | 2.381054 / 2.142072 (0.238981) | 0.844404 / 4.805227 (-3.960823) | 0.171802 / 6.500664 (-6.328862) | 0.065630 / 0.075469 (-0.009839) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262187 / 1.841788 (-0.579600) | 16.197668 / 8.074308 (8.123360) | 15.148636 / 10.191392 (4.957244) | 0.152601 / 0.680424 (-0.527823) | 0.020238 / 0.534201 (-0.513963) | 0.420141 / 0.579283 (-0.159142) | 0.416295 / 0.434364 (-0.018068) | 0.487051 / 0.540337 (-0.053286) | 0.581942 / 1.386936 (-0.804994) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1795 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1795/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1795/comments | https://api.github.com/repos/huggingface/datasets/issues/1795/events | https://github.com/huggingface/datasets/pull/1795 | 797,021,730 | MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz | 1,795 | Custom formatting for lazy map + arrow data extraction refactor | [] | closed | false | null | 8 | 2021-01-29T16:35:53Z | 2022-07-30T09:50:11Z | 2021-02-05T09:54:06Z | null | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/pandas/PyTorch/TensorFlow, on-the-fly.
A specific format can be activated with `datasets.Dataset.set_format`. For example: `dataset.set_format(type='torch', columns=['label'])`.
### What's new:
You can now also define your own formatting function that is applied on-the-fly. To do so you can pass your formatting function in the `transform` parameter of `datasets.Dataset.set_format`, and keep `type` to `None`.
A formatting function is a callable that takes a batch (as a dict, formatted as python) as input and returns a batch.
Here is an example to tokenize and pad tokens on-the-fly when accessing the samples:
```python
from datasets import load_dataset
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
def encode(batch):
return tokenizer(batch["sentence1"], padding="longest", truncation=True, max_length=512, return_tensors="pt")
dataset = load_dataset("glue", "mrpc", split="train")
dataset.set_format(transform=encode)
dataset.format
# {'type': 'custom', 'format_kwargs': {'transform': <function __main__.encode(batch)>}, 'columns': ['idx', 'label', 'sentence1', 'sentence2'], 'output_all_columns': False}
dataset[:2]
# {'input_ids': tensor([[ 101, 2572, 3217, ... 102]]), 'token_type_ids': tensor([[0, 0, 0, ... 0]]), 'attention_mask': tensor([[1, 1, 1, ... 1]])}
```
Let me know what you think of this API !
We can still change it if we want to.
Especially @sgugger since this may be useful when using `datasets` to train models.
EDIT: this was changed to `dataset.set_transform(encode)`
-------------------
Note:
I had to refactor the way data are extracted and formatted from pyarrow tables and I made it more robust and flexible. In particular I modularized it to be able to unit-test it properly. This was very helpful since I detected some bugs in the previous implementation and was able to fix them.
Some bugs I found and fixed:
- certain slices/ranges were not supported because negative ids were passed to pyarrow
- formatting a column as numpy/torch/tensorflow would make it lose its precision information (for example a column typed as `Value("float32")` would be returned as a tensor of float64, the default behavior for numpy)
- on Windows, integers formatted as numpy/torch/tensorflow were not always int64 tensors by default but were sometimes int32
The unit tests for those are now really extensive :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1795/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1795/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1795.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1795",
"merged_at": "2021-02-05T09:54:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1795.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1795"
} | true | [
"This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation, and some people might not look too far into the doc.\r\n\r\nMaybe we could have an `apply_transform` or `process_columns` method which is called by `set_format` (to keep backward compatibility)?",
"What about something like `.set_format` and `.set_transform` ?\r\n- set_format would be the same as right now, i.e. defined by a format type.\r\n- set_transform would define the transformation that is applied on output batches on-the-fly.\r\n\r\nI was also thinking about `._with_format` and `.with_transform`. It could be their equivalent but would create a **new** dataset with the corresponding format or transform ? I know @sgugger was interested in something like that.",
"Yup, I think that would make all of these options very clear!",
"I like all those options as well (as long as the `_` in `_with_format` is a typo ;-) )",
"Yes it's a typo indeed ;)\r\n\r\nAlright I'll do the changes !",
"I took all your suggestions into account, thanks :)\r\nLet me know if you have more comments",
"Hi @lhoestq , thanks for offering the set_transform() function. It is very handy to process large datasets on the fly. But I ran into a problem when using it (error message shown below). Since we are working with a large collection, there's no way to filter all invalid data points beforehand. Those invalid data points will be problematic with the set_transform and I don't find a good work-around to ignore them. I wonder if you can offer some advice on dealing with invalid data points in this case. Thank you!\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n data = fetcher.fetch(index)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1763, in __getitem__\r\n return self._getitem(\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1748, in _getitem\r\n formatted_output = format_table(\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n return formatter(pa_table, query_type=query_type)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n return self.format_row(pa_table)\r\n File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 391, in format_row\r\n raise TypeError(\r\nTypeError: Custom formatting function must return a dict to be able to pick a row, but got None\r\n\r\n```\r\n",
"> Hi @lhoestq , thanks for offering the set_transform() function. It is very handy to process large datasets on the fly. But I ran into a problem when using it (error message shown below). Since we are working with a large collection, there's no way to filter all invalid data points beforehand. Those invalid data points will be problematic with the set_transform and I don't find a good work-around to ignore them. I wonder if you can offer some advice on dealing with invalid data points in this case. Thank you!\r\n> \r\n> ```\r\n> Traceback (most recent call last):\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py\", line 198, in _worker_loop\r\n> data = fetcher.fetch(index)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in fetch\r\n> data = [self.dataset[idx] for idx in possibly_batched_index]\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 44, in <listcomp>\r\n> data = [self.dataset[idx] for idx in possibly_batched_index]\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1763, in __getitem__\r\n> return self._getitem(\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1748, in _getitem\r\n> formatted_output = format_table(\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 532, in format_table\r\n> return formatter(pa_table, query_type=query_type)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 281, in __call__\r\n> return self.format_row(pa_table)\r\n> File \"/export/share/ruimeng/env/anaconda/envs/ir/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 391, in format_row\r\n> raise TypeError(\r\n> TypeError: Custom formatting function must return a dict to be able to pick a row, but got None\r\n> ```\r\n\r\nI found this trick can be helpful: return an empty dict in exception:\r\n```\r\ndef transform_fn(example):\r\n try:\r\n process_your_data(example)\r\n except Exception as e:\r\n print(e)\r\n return {'input_ids': [[]], 'token_type_ids': [[]], 'attention_mask': [[]]}\r\ntrain_dataset = datasets.load_dataset(...)\r\ntrain_dataset = train_dataset.with_transform(parse_fn)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1332/comments | https://api.github.com/repos/huggingface/datasets/issues/1332/events | https://github.com/huggingface/datasets/pull/1332 | 759,679,135 | MDExOlB1bGxSZXF1ZXN0NTM0NjQxOTE5 | 1,332 | Add Open Subtitles Dataset | [] | closed | false | null | 0 | 2020-12-08T18:31:45Z | 2020-12-10T11:17:38Z | 2020-12-10T11:13:18Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1332/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1332",
"merged_at": "2020-12-10T11:13:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1332"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1340/comments | https://api.github.com/repos/huggingface/datasets/issues/1340/events | https://github.com/huggingface/datasets/pull/1340 | 759,765,408 | MDExOlB1bGxSZXF1ZXN0NTM0NzExMjc5 | 1,340 | :fist: ยกViva la Independencia! | [] | closed | false | null | 1 | 2020-12-08T20:43:43Z | 2020-12-14T10:36:01Z | 2020-12-14T10:36:01Z | null | Adds the Catalonia Independence Corpus for stance-detection of Tweets.
Ready for review! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 3,
"laugh": 4,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1340",
"merged_at": "2020-12-14T10:36:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1340"
} | true | [
"I've added the changes / fixes - ready for a second pass :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3513/comments | https://api.github.com/repos/huggingface/datasets/issues/3513/events | https://github.com/huggingface/datasets/pull/3513 | 1,092,569,802 | PR_kwDODunzps4weGWl | 3,513 | Add desc parameter to filter | [] | closed | false | null | 0 | 2022-01-03T14:44:18Z | 2022-01-05T18:31:25Z | 2022-01-05T18:31:25Z | null | Fix #3317 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3513/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3513.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3513",
"merged_at": "2022-01-05T18:31:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3513.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3513"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1039/comments | https://api.github.com/repos/huggingface/datasets/issues/1039/events | https://github.com/huggingface/datasets/pull/1039 | 756,000,478 | MDExOlB1bGxSZXF1ZXN0NTMxNjE3MDI2 | 1,039 | Update ADD NEW DATASET | [] | closed | false | null | 0 | 2020-12-03T08:58:32Z | 2020-12-03T09:18:28Z | 2020-12-03T09:18:10Z | null | This PR adds a couple of details on cloning/rebasing the repo. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1039/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1039.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1039",
"merged_at": "2020-12-03T09:18:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1039.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1039"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2275/comments | https://api.github.com/repos/huggingface/datasets/issues/2275/events | https://github.com/huggingface/datasets/issues/2275 | 869,378,311 | MDU6SXNzdWU4NjkzNzgzMTE= | 2,275 | SNLI dataset has labels of -1 | [] | closed | false | null | 1 | 2021-04-28T00:32:25Z | 2021-05-17T13:34:18Z | 2021-05-17T13:34:18Z | null | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.
It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to introduce them, but it is still unclear why they are there. The current workaround is to just drop those rows before training any model.
Perhaps the documentation should be updated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2275/timeline | null | completed | null | null | false | [
"Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2993/comments | https://api.github.com/repos/huggingface/datasets/issues/2993/events | https://github.com/huggingface/datasets/issues/2993 | 1,012,702,665 | I_kwDODunzps48XJ3J | 2,993 | Can't download `trivia_qa/unfiltered` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-09-30T23:00:18Z | 2021-10-01T19:07:23Z | 2021-10-01T19:07:22Z | null | ## Describe the bug
For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer, though...
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("trivia_qa", "unfiltered")
Downloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6...
Traceback (most recent call last):
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 251, in _add_context
with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfsscratch/rech/six/commun/datasets/downloads/extracted/9fcb7eddc6afd46fd074af3c5128931dfe4b548f933c925a23847faf4c1995ad/evidence/wikipedia/Peanuts.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset
use_auth_token=use_auth_token,
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
disable=bool(logging.get_verbosity() == logging.NOTSET),
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 303, in _generate_examples
example = parse_example(article)
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 274, in parse_example
_add_context(article.get("EntityPages", []), "WikiContext", wiki_dir),
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 253, in _add_context
except (IOError, datasets.Value("errors").NotFoundError):
File "<string>", line 5, in __init__
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 265, in __post_init__
self.pa_type = string_to_arrow(self.dtype)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 134, in string_to_arrow
f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. "
ValueError: Neither errors nor errors_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
## Expected results
I am able to load another subset (`rc`), but unable to load this one.
I am not sure why the try/except doesn't catch it...
https://github.com/huggingface/datasets/blob/9675a5a1e7b99a86f9c250f6ea5fa5d1e6d5cc7d/datasets/trivia_qa/trivia_qa.py#L253
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-redhat-8.1-Ootpa
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2993/timeline | null | completed | null | null | false | [
"wooo that was fast! thank you @lhoestq !\r\nit is able to process now, though it's ignoring all files and ending up with 0 examples now haha :/\r\n\r\nFor subset \"unfiltered\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"unfiltered\")\r\nDownloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 1354.53it/s]\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 2/2 [00:00<00:00, 40.60it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=2906575347, num_examples=10832, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=3038966234, num_examples=11313, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\nFor subset \"rc\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"rc\")\r\nDownloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/rc/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 3806.08it/s]\r\n100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 1/1 [00:00<00:00, 51.57it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise 
NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=1577814583, num_examples=17210, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='train', num_bytes=12750976012, num_examples=138384, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=1688535379, num_examples=18669, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\n\r\nCould you look into that when you get a chance?\r\nI wonder if it's not something they changed on the file to download? i couldn't find any information",
"@VictorSanh have you tried passing `download_mode=\"force_redownload\"`?\r\n```python\r\nds = load_dataset(\"trivia_qa\", \"unfiltered\", download_mode=\"force_redownload\")\r\n```",
"I aggressively rmed caches, especially rming the `datasets/downloads/extracted/c3d265fa20d99a147a76e4f5e...` solved the issue.\r\nthank you both!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3224/comments | https://api.github.com/repos/huggingface/datasets/issues/3224/events | https://github.com/huggingface/datasets/pull/3224 | 1,046,495,831 | PR_kwDODunzps4uLk2q | 3,224 | User-pickling with dynamic sub-classing | [] | open | false | null | 18 | 2021-11-06T12:08:24Z | 2022-07-06T15:19:48Z | null | null | This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this.
In this PR, the behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they have objects that are not easily picklable with default methods. When one registers a custom function for a type, an object of that type will be pickled with the given function by `Pickler`, which looks up the type in its `dispatch` table. The downside of this method, and of `pickle` in general, is that it is limited to direct type-matching and does not allow sub-classes. In many default cases that is not an issue. But when you are using external libraries where classes (e.g. parsers, models) are sub-classed, this is not ideal.
```python
from datasets.fingerprint import Hasher
from datasets.utils.py_utils import pklregister
class BaseParser:
pass
class EnglishParser(BaseParser):
pass
@pklregister(BaseParser)
def custom_pkl_func(pickler, obj):
print(f"Called the custom pickle function for type {type(obj)}!")
# do something with the obj and ultimately save with the pickler
base = BaseParser()
en = EnglishParser()
# Hasher.hash uses the Pickler behind the scenes
# `custom_pkl_func` called for base
Hasher.hash(base)
# `custom_pkl_func` not called for en :-(
Hasher.hash(en)
```
In the example above we'd want to sub-class `EnglishParser` to be handled in the same way as its super-class `BaseParser`. This PR solves that by allowing for a keyword-argument `allow_subclasses` in `pklregister` (default: `False`).
```python
@pklregister(BaseParser, allow_subclasses=True)
```
When this option is enabled, we not only save the function in `Pickler.dispatch` but also save it in a custom table `Pickler.subclass_dispatch`, **which allows us to dynamically add sub-classes of that class to the real dispatch table**. Then, if we want to pickle an object `obj` with `Pickler.dump()` (which ultimately will call `Pickler.save()`), we _first_ check whether any of the object's super-classes exist in `Pickler.subclass_dispatch` and get the related custom pickle function. If we find one, we add the type of `obj` alongside the function to `Pickler.dispatch`. All of this happens at the start of the call to `Pickler.save()`. _Only then_ will dill.Pickler's `save` be called, which in turn will call `pickle._Pickler.save`, which handles everything. Here, the `Pickler.dispatch` table will be used to look up custom pickler functions - and it now also includes the function for `obj`, copied from its super-class, which we added at the very start of our custom `Pickler.save()`.
For edge cases and, especially, for testing, a contextmanager class `TempPickleRegistry` is included that resets the pickle registry on exit to its previous state.
```python
with TempPickleRegistry():
@pklregister(MyObjClass)
def pickle_registry_test_false(pickler, obj):
pickler.save(obj.fancy_method())
some_obj = MyObjClass()
dumps(some_obj)
# `MyObjClass` is in Pickler.dispatch
# ... `MyObjClass` is _not_ in Pickler.dispatch anymore
```
closes https://github.com/huggingface/datasets/issues/3178
To Do
====
- [x] Write tests
- [ ] Write documentation/examples? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3224/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3224.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3224",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3224.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3224"
} | true | [
"@lhoestq Feel free to have a look. The implementation is slightly different from what you suggested. I have opted to overwrite `save` instead of meddling with `save_global`. `save_global` is called very late down in dill/pickle so it is hard to control for what is happening there. I might be wrong. Pickling is more complex than I thought! \r\n\r\nThe linked issue (`map` with spaCy) also works now!\r\n\r\n```python\r\nimport pickle\r\nimport spacy\r\nfrom spacy import Language\r\nfrom datasets import load_dataset\r\nfrom datasets.utils.py_utils import dumps, pklregister\r\n\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, nlp: Language):\r\n pickler.save(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"large/file.txt\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n ds = ds[\"train\"].map(tokenize)\r\n\r\n # Sanity check: load NLP from pickle created with our own `dumps`\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n\r\n assert isinstance(nlp2, type(nlp))\r\n assert dumps(nlp) == dumps(nlp2)\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nIf this all looks good to you, I'll start writing on some documentation and examples.\r\n",
"One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n```python\r\n@pklregister(Language, allow_subclasses=True)\r\ndef hash_spacy_language(pickler, obj):\r\n def create_language(config, bytes_data):\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp = lang_cls.from_config(config)\r\n return nlp.from_bytes(bytes_data)\r\n\r\n args = (obj.config, obj.to_bytes())\r\n pickler.save_reduce(create_language, args, obj=obj)\r\n```\r\nso IMO we are missing a test with `pickler.save_reduce`. ",
"> One more thing. This is a reduction function for SpaCy Language that should work with the new API:\r\n> \r\n> ```python\r\n> @pklregister(Language, allow_subclasses=True)\r\n> def hash_spacy_language(pickler, obj):\r\n> def create_language(config, bytes_data):\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp = lang_cls.from_config(config)\r\n> return nlp.from_bytes(bytes_data)\r\n> \r\n> args = (obj.config, obj.to_bytes())\r\n> pickler.save_reduce(create_language, args, obj=obj)\r\n> ```\r\n> \r\n> so IMO we are missing a test with `pickler.save_reduce`.\r\n\r\nSure that seems a good idea, but I do not quite understand what `save_reduce` does. Could you give some more info about what reduce functions do and how they differ from regular `save` and `save_global`? I've read about it but the docs nor the built-in `pickle` code seem really helpful.",
"I'm no pickle expect, but here is my understanding. I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n\r\nFor example your sanity check could be simplified from\r\n```python\r\n config = nlp.config\r\n lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n nlp2 = lang_cls.from_config(config)\r\n nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n```\r\nto\r\nEDIT: <s>pickle.loads(pickle.dumps(nlp))</s>\r\n```python\r\n nlp2 = loads(dumps(nlp)) # using our custom pickler\r\n```\r\n\r\nThough note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.",
"> I'm no pickle expect, but here is my understanding. I believe the pickler uses the reduce function when you do `loads` to reconstructs the original object from the parameters/arguments that were saved with `dumps`.\r\n> \r\n> For example your sanity check could be simplified from\r\n> \r\n> ```python\r\n> config = nlp.config\r\n> lang_cls = spacy.util.get_lang_class(config[\"nlp\"][\"lang\"])\r\n> nlp2 = lang_cls.from_config(config)\r\n> nlp2 = nlp2.from_bytes(pickle.loads(dumps(nlp)))\r\n> ```\r\n> \r\n> to\r\n> \r\n> ```python\r\n> nlp2 = pickle.loads(pickle.dumps(nlp))\r\n> ```\r\n> \r\n> Though note that while it can be convenient for tests, we actually don't care about the reconstruction of the object since we're only using the pickler for `dumps` to compute hashes.\r\n\r\nYes, the sanity check can be simplified like that _if_ we use `pickle.dumps` - but that would not test our own `dumps` functionality and would do a naive dump instead of using `to_bytes`. It won't work if we use our own `dumps`, exactly because of the reason that we want custom pickling and being able to call `to_bytes`. To reconstruct the object from the pickled bytes from `to_bytes` we need `from_bytes`. The result of pickle/dill loads will therefore always be a `bytes` object and not a `Language` object.\r\n\r\nBut `save_reduce` is called when saving, right? Not when loading, AFAICT. I am just not sure what exactly it is saving. It is _potentially_ called [at the end of `save`](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L603) but only if we haven't returned by then. I just can't figure out what that base case is.",
"I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with `save` ?",
"@BramVanroy \r\nAs I understand `save_reduce` is very similar to `copyreg.pickle`, so I'd suggest you to check the following links:\r\n* https://docs.python.org/3/library/copyreg.html#copyreg.pickle\r\n* https://docs.python.org/3/library/pickle.html#object.__reduce__\r\n\r\n\r\n@lhoestq \r\n> I don't think we expect users to write the reduce function that isn't going to be used anyway. So maybe let's stick with save ?\r\n\r\nI agree. \r\n\r\n`save_reduce` is very similar to `copyreg.pickle` and `object.__reduce__`, which are part of public API (and `save` isn't), so I expect more advanced users to know how to write their own reduction functions. But, as you say, `pklregister` should also work with `save` (even though I think `save` is a bit lower-level, and harder to understand than `save_reduce`).\r\n\r\nAll our examples in `py_utils` that use `pklregister` also use `save_reduce` in the last step, so my reduction for SpaCy is meant to be added there, and not to be written by users (because SpaCy is very popular, so the official support by us makes sense :)).\r\n\r\nAnd in the tests, let's ignore the reconstruction part of pickle/dill, because it's not important for us, and focus on the generated dumps. What do you think?",
"@mariosasko What exactly do you mean with \"isn't part of the public API\"? It is [a public method](https://github.com/python/cpython/blob/24af9a40a8f85af813ea89998aa4e931fcc78cd9/Lib/pickle.py#L535) in base pickle, just like `dump` is but maybe you mean something else.",
"@BramVanroy Oh sorry, it's public (not prefixed with `\"_\"`) but it's not documented in the docs. `save_reduce` is also not in the docs, but its signature/functionality is similar to `copyreg.pickle` and I see it more often being used in the projects on GH, so it's seems \"more public\" to me. ",
"Unfortunately I feel that pickle in general is under-documented. ๐ \r\n\r\nFor the documentation, I can add a brief example, maybe under \"How-to Guides\"? The only thing that isn't immediately obvious to me is how I can add that doc page to the TOC?",
"Yes great idea ! To add that doc page to the TOC, you just have to add it to the index.rst file in the \"How-to guides\" TOC section",
"@mariosasko @lhoestq Feel free to make any edits or suggestions in the text!",
"Hi @mariosasko. I wish you'd told me sooner, as I spent quite some time writing on this.\r\n\r\nI'm also not sure whether it is too advanced to have in the documentation. The spaCy use-case seems potentially frequent. Or do you wish to add that case to the defaults, and whenever new issues come up that seem like frequent/obvious cases, add those internally as well?",
"Documenting the internal `pklregister` is overkill IMO (and it can be kept in docstrings), but we can document something higher level like `register_hash_func` once it's implemented.\r\n\r\nSo we keep the nice documentation you've written (thank you!), except we can rename it to \"Advanced caching\" and show an API that is similar to\r\n```python\r\n>>> @register_hash_func(Language, allow_subclasses=True)\r\n>>> def hash_spacy_language(nlp: Language):\r\n>>> return (nlp.to_bytes(),)\r\n```\r\nThis way we keep the documentation centered around the public API rather than the internals that may evolve/be too complicated to fit only one section.\r\n\r\n> Or do you wish to add that case to the defaults, and whenever new issues come up that seem like frequent/obvious cases, add those internally as well?\r\n\r\nLet's add it to the defaults since it's a frequent use-case. And also allow users to control the hashing using the API mentioned above if they face other non-trivially-hashable objects",
"Sure, I can have a go at implementing spaCy as a built-in. Should it be included in the tests? (Therefore adding spaCy to the tests requirements.)\r\n\r\nNext, from your example, it seems that the return value of `register_hash_func` will be used in pickler.save automatically (calling pklregister a bit deeper). Any reason why it returns a tuple? I can work on this as well, if needed.",
"> Sure, I can have a go at implementing spaCy as a built-in. Should it be included in the tests? (Therefore adding spaCy to the tests requirements.)\r\n\r\nThat would be perfect !\r\n\r\n> Next, from your example, it seems that the return value of register_hash_func will be used in pickler.save automatically (calling pklregister a bit deeper). \r\n\r\nYes I think so. For example register_hash_func can call pklregister with the user's function, but wrapped to use pickler.save.\r\n\r\n> Any reason why it returns a tuple? I can work on this as well, if needed.\r\n\r\nIt can either return an arbitrary object or a tuple. I like it a bit better if it's a tuple, so users understand more easily how to make the function take into account more than one item for the hash. It's also consistent with the streamlit caching functions, that also require a tuple. No strong opinion on this though\r\n\r\nLet me know if I can help with anything",
"@lhoestq I do not have the time anymore to work on this. Can someone else pick this up?",
"Hi ! Sure someone else can continue this PR (either someone from HF, or other contributors can fork the PR).\r\nI think I can work on this next week or the week after, but if anyone wants to work on this earlier feel free to comment here :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1628/comments | https://api.github.com/repos/huggingface/datasets/issues/1628/events | https://github.com/huggingface/datasets/pull/1628 | 774,091,411 | MDExOlB1bGxSZXF1ZXN0NTQ1MDY5NTAy | 1,628 | made suggested changes to hate-speech-and-offensive-language | [] | closed | false | null | 0 | 2020-12-23T23:25:32Z | 2020-12-28T10:11:20Z | 2020-12-28T10:11:20Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1628/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1628",
"merged_at": "2020-12-28T10:11:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1628"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5769/comments | https://api.github.com/repos/huggingface/datasets/issues/5769/events | https://github.com/huggingface/datasets/issues/5769 | 1,673,441,182 | I_kwDODunzps5jvq-e | 5,769 | Tiktoken tokenizers are not pickable | [] | closed | false | null | 1 | 2023-04-18T16:07:40Z | 2023-05-04T18:55:57Z | 2023-05-04T18:55:57Z | null | ### Describe the bug
Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object`.
### Steps to reproduce the bug
```
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
def process(example):
    ids = enc.encode(example['text'])
    ids.append(enc.eot_token)
    out = {'ids': ids, 'len': len(ids)}
    return out
tokenized = dataset.map(
    process,
    remove_columns=['text'],
    desc="tokenizing the OWT splits",
    num_proc=2,
)
```
### Expected behavior
starts processing dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5769/timeline | null | completed | null | null | false | [
"Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2295/comments | https://api.github.com/repos/huggingface/datasets/issues/2295/events | https://github.com/huggingface/datasets/pull/2295 | 872,902,867 | MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3 | 2,295 | Create ExtractManager | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 2 | 2021-04-30T17:13:34Z | 2021-07-12T14:12:03Z | 2021-07-08T08:11:49Z | null | Perform refactoring to decouple extract functionality. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2295/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2295",
"merged_at": "2021-07-08T08:11:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2295"
} | true | [
"Hi @lhoestq,\r\n\r\nOnce that #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578 but for all the other file compression formats.\r\n\r\nThanks.",
"I think all is done @lhoestq ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/816/comments | https://api.github.com/repos/huggingface/datasets/issues/816/events | https://github.com/huggingface/datasets/issues/816 | 739,102,686 | MDU6SXNzdWU3MzkxMDI2ODY= | 816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | [] | closed | false | null | 1 | 2020-11-09T15:01:20Z | 2020-11-11T15:20:50Z | 2020-11-11T15:20:50Z | null | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/816/timeline | null | completed | null | null | false | [
"To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order"
] |
https://api.github.com/repos/huggingface/datasets/issues/876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/876/comments | https://api.github.com/repos/huggingface/datasets/issues/876/events | https://github.com/huggingface/datasets/issues/876 | 748,195,104 | MDU6SXNzdWU3NDgxOTUxMDQ= | 876 | imdb dataset cannot be loaded | [] | closed | false | null | 5 | 2020-11-22T08:24:43Z | 2021-11-26T11:07:16Z | 2020-12-24T17:38:47Z | null | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting the following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
>>> dataset = datasets.load_dataset("imdb", split="train")
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/876/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/876/timeline | null | completed | null | null | false | [
"It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n```\r\nto make sure it's not a corrupted file issue ?",
"I was using version 1.1.2 and this resolved with version 1.1.3, thanks. ",
"Hello,\r\nI have the same pb with 1.8.0",
"Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now",
"Hello,\r\nIt works fine now :) !\r\nThanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3775/comments | https://api.github.com/repos/huggingface/datasets/issues/3775/events | https://github.com/huggingface/datasets/pull/3775 | 1,146,849,454 | PR_kwDODunzps4zSEd4 | 3,775 | Update gigaword card and info | [] | closed | false | null | 3 | 2022-02-22T12:27:16Z | 2022-02-28T11:35:24Z | 2022-02-28T11:35:24Z | null | Reported on the forum: https://discuss.huggingface.co/t/error-loading-dataset/14999 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3775/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3775.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3775",
"merged_at": "2022-02-28T11:35:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3775.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3775"
} | true | [
"I think it actually comes from an issue here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/file_utils.py#L575-L579\r\n\r\nand \r\n\r\nhttps://github.com/huggingface/datasets/blob/810b12f763f5cf02f2e43565b8890d278b7398cd/src/datasets/utils/streaming_download_manager.py#L386-L389\r\n\r\nThis code doesn't seem to work anymore. This can probably be fixed with\r\n\r\n```python\r\nif url.startswith(\"https://drive.google.com/\"): \r\n url += \"&confirm=t\"\r\n cookies = response.cookies \r\n```\r\n\r\nbecause Google Drive doesn't return the `download_warning` cookie anymore.",
"Actually it seems that is has been fixed already in https://github.com/huggingface/datasets/pull/3787 :)\r\n\r\nI think it should have fixed the gigaword dataset loading",
"@lhoestq The linked PR indeed fixes the issue. This PR is still worth merging IMO to update `gigaword`'s card."
] |
https://api.github.com/repos/huggingface/datasets/issues/968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/968/comments | https://api.github.com/repos/huggingface/datasets/issues/968/events | https://github.com/huggingface/datasets/pull/968 | 754,659,015 | MDExOlB1bGxSZXF1ZXN0NTMwNTIwMjEz | 968 | ADD Afrikaans NER | [] | closed | false | null | 1 | 2020-12-01T19:23:03Z | 2020-12-02T09:41:28Z | 2020-12-02T09:41:28Z | null | Afrikaans NER corpus | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/968/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/968/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/968.diff",
"html_url": "https://github.com/huggingface/datasets/pull/968",
"merged_at": "2020-12-02T09:41:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/968.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/968"
} | true | [
"One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my_dataset_name>\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/5495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5495/comments | https://api.github.com/repos/huggingface/datasets/issues/5495/events | https://github.com/huggingface/datasets/issues/5495 | 1,566,803,452 | I_kwDODunzps5dY4X8 | 5,495 | to_tf_dataset fails with datetime UTC columns even if not included in columns argument | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 2 | 2023-02-01T20:47:33Z | 2023-02-08T14:33:19Z | 2023-02-08T14:33:19Z | null | ### Describe the bug
There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset, even columns that aren't included in the `columns` argument. This is problematic with datetime UTC columns because they do not work with zero-copy conversion. If I don't have UTC information in my datetime column, then everything works as expected.
### Steps to reproduce the bug
```python
import numpy as np
import pandas as pd
from datasets import Dataset
df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")
ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```
```
ArrowInvalid Traceback (most recent call last)
Cell In[1], line 12
8 df.to_parquet("test.pq")
11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
407 dataset = self
409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
412 dataset,
413 collate_fn=collate_fn,
414 collate_fn_args=collate_fn_args,
415 cols_to_retain=cols_to_retain,
416 batch_size=batch_size if drop_remainder else None,
417 )
419 if "labels" in output_signature:
420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns:
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
252 for _ in range(num_test_batches):
253 indices = sample(range(len(dataset)), test_batch_size)
--> 254 test_batch = dataset[indices]
255 if cols_to_retain is not None:
256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
2588 def __getitem__(self, key): # noqa: F811
2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590 return self._getitem(
2591 key,
2592 )
File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
2577 )
2578 return formatted_output
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
632 python_formatter = PythonFormatter(features=None)
633 if format_columns is None:
--> 634 return formatter(pa_table, query_type=query_type)
635 elif query_type == "column":
636 if key in format_columns:
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
408 return self.format_column(pa_table)
409 elif query_type == "batch":
--> 410 return self.format_batch(pa_table)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table)
79 batch = self.python_features_decoder.decode_batch(batch)
80 batch = self.recursive_tensorize(batch)
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
--> 185 array: List = [
186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
181 else:
182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
184 )
185 array: List = [
--> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
187 ]
188 else:
189 if isinstance(pa_array.type, _ArrayXDExtensionType):
190 # don't call to_pylist() to preserve dtype of the fixed-size array
File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()
File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()
ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```
### Expected behavior
I think there are two potential issues/fixes
1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)
### Environment info
- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5495/timeline | null | completed | null | null | false | [
"Hi! This is indeed a bug in our zero-copy logic.\r\n\r\nTo fix it, instead of the line:\r\nhttps://github.com/huggingface/datasets/blob/7cfac43b980ab9e4a69c2328f085770996323005/src/datasets/features/features.py#L702\r\n\r\nwe should have:\r\n```python\r\nreturn pa.types.is_primitive(pa_type) and not (pa.types.is_boolean(pa_type) or pa.types.is_temporal(pa_type))\r\n```",
"@mariosasko submitted a small PR [here](https://github.com/huggingface/datasets/pull/5504)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3084/comments | https://api.github.com/repos/huggingface/datasets/issues/3084/events | https://github.com/huggingface/datasets/issues/3084 | 1,026,428,992 | I_kwDODunzps49LhBA | 3,084 | VisibleDeprecationWarning when using `set_format("numpy")` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-14T13:53:01Z | 2021-10-22T16:04:14Z | 2021-10-22T16:04:14Z | null | Code to reproduce:
```
from datasets import load_dataset
dataset = load_dataset("glue", "mnli")
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased')
def tokenize_function(dataset):
return tokenizer(dataset['premise'])
tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features)
tokenized_datasets.set_format("numpy")
tokenized_datasets['train'][5:8]
```
Outputs:
```
python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
return np.array(array, copy=False, **self.np_array_kwargs)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3084/timeline | null | completed | null | null | false | [
"I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6061/comments | https://api.github.com/repos/huggingface/datasets/issues/6061/events | https://github.com/huggingface/datasets/pull/6061 | 1,818,337,136 | PR_kwDODunzps5WOi79 | 6,061 | Dill 3.7 support | [] | closed | false | null | 5 | 2023-07-24T12:33:58Z | 2023-07-24T14:13:20Z | 2023-07-24T14:04:36Z | null | Adds support for dill 3.7. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6061/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6061.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6061",
"merged_at": "2023-07-24T14:04:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6061.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6061"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007700 / 0.011353 (-0.003653) | 0.004680 / 0.011008 (-0.006328) | 0.098812 / 0.038508 (0.060304) | 0.085062 / 0.023109 (0.061952) | 0.371472 / 0.275898 (0.095574) | 0.412552 / 0.323480 (0.089072) | 0.004700 / 0.007986 (-0.003285) | 0.003765 / 0.004328 (-0.000564) | 0.074267 / 0.004250 (0.070017) | 0.063003 / 0.037052 (0.025951) | 0.391842 / 0.258489 (0.133353) | 0.436955 / 0.293841 (0.143114) | 0.035291 / 0.128546 (-0.093255) | 0.009309 / 0.075646 (-0.066338) | 0.313097 / 0.419271 (-0.106174) | 0.060098 / 0.043533 (0.016565) | 0.350726 / 0.255139 (0.095587) | 0.402692 / 0.283200 (0.119493) | 0.029321 / 0.141683 (-0.112361) | 1.671806 / 1.452155 (0.219651) | 1.743760 / 1.492716 (0.251044) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242281 / 0.018006 (0.224275) | 0.505054 / 0.000490 (0.504564) | 0.006595 / 0.000200 (0.006395) | 0.000091 / 0.000054 (0.000037) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032174 / 0.037411 (-0.005238) | 0.094483 / 0.014526 (0.079957) | 0.108527 / 0.176557 (-0.068030) | 0.178983 / 0.737135 (-0.558152) | 0.113766 / 0.296338 (-0.182572) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419764 / 0.215209 (0.204555) | 4.282650 / 2.077655 (2.204995) | 2.075325 / 1.504120 (0.571205) | 1.897668 / 1.541195 (0.356473) | 2.027109 / 1.468490 
(0.558619) | 0.519983 / 4.584777 (-4.064794) | 4.134603 / 3.745712 (0.388891) | 6.586711 / 5.269862 (1.316849) | 3.811726 / 4.565676 (-0.753951) | 0.058628 / 0.424275 (-0.365647) | 0.007586 / 0.007607 (-0.000021) | 0.502180 / 0.226044 (0.276136) | 5.101588 / 2.268929 (2.832660) | 2.534295 / 55.444624 (-52.910330) | 2.220170 / 6.876477 (-4.656307) | 2.441110 / 2.142072 (0.299038) | 0.644775 / 4.805227 (-4.160452) | 0.144716 / 6.500664 (-6.355948) | 0.067018 / 0.075469 (-0.008451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.431279 / 1.841788 (-0.410508) | 21.947814 / 8.074308 (13.873506) | 15.548236 / 10.191392 (5.356844) | 0.174774 / 0.680424 (-0.505650) | 0.021182 / 0.534201 (-0.513019) | 0.441320 / 0.579283 (-0.137963) | 0.476685 / 0.434364 (0.042321) | 0.506277 / 0.540337 (-0.034060) | 0.809943 / 1.386936 (-0.576993) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007172 / 0.011353 (-0.004181) | 0.004358 / 0.011008 (-0.006650) | 0.068604 / 0.038508 (0.030096) | 0.083956 / 0.023109 (0.060847) | 0.402579 / 0.275898 (0.126681) | 0.444714 / 0.323480 (0.121235) | 0.005940 / 0.007986 (-0.002046) | 0.003607 / 0.004328 (-0.000722) | 0.073134 / 0.004250 (0.068883) | 0.061722 / 0.037052 (0.024669) | 0.410957 / 0.258489 (0.152468) | 0.458819 / 0.293841 (0.164978) | 0.033710 / 0.128546 (-0.094836) | 0.010230 / 0.075646 (-0.065417) | 0.084678 / 0.419271 (-0.334593) | 0.058203 / 0.043533 (0.014670) | 0.444972 / 0.255139 (0.189833) | 0.470962 / 0.283200 (0.187763) | 0.029222 / 0.141683 (-0.112461) | 1.671460 / 1.452155 (0.219306) | 1.759471 / 1.492716 (0.266754) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238894 / 0.018006 (0.220888) | 0.493605 / 0.000490 (0.493115) | 0.001979 / 0.000200 (0.001780) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036498 / 0.037411 (-0.000913) | 0.095245 / 0.014526 (0.080719) | 0.112147 / 0.176557 (-0.064409) | 0.171128 / 0.737135 (-0.566007) | 0.115295 / 0.296338 (-0.181044) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461067 / 0.215209 (0.245858) | 4.723932 / 2.077655 (2.646277) | 2.432697 / 1.504120 (0.928578) | 2.237302 / 1.541195 (0.696107) | 2.351320 / 1.468490 (0.882830) | 0.509963 / 4.584777 (-4.074813) | 4.194817 / 3.745712 (0.449105) | 6.689529 / 5.269862 (1.419667) | 3.351198 / 4.565676 (-1.214478) | 0.064563 / 0.424275 (-0.359712) | 0.008605 / 0.007607 (0.000998) | 0.575590 / 0.226044 (0.349546) | 5.644179 / 2.268929 (3.375250) | 3.021375 / 55.444624 (-52.423249) | 2.595305 / 6.876477 (-4.281172) | 2.839228 / 2.142072 (0.697156) | 0.657148 / 4.805227 (-4.148079) | 0.144831 / 6.500664 (-6.355834) | 0.067882 / 0.075469 (-0.007587) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595580 / 1.841788 (-0.246208) | 22.431609 / 8.074308 (14.357301) | 15.700845 / 10.191392 (5.509453) | 0.164675 / 0.680424 (-0.515749) | 0.021322 / 0.534201 (-0.512879) | 0.455270 / 0.579283 (-0.124013) | 0.451547 / 0.434364 (0.017183) | 0.520955 / 0.540337 (-0.019383) | 0.687803 / 1.386936 (-0.699133) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008171 / 0.011353 (-0.003182) | 0.005563 / 0.011008 (-0.005445) | 0.102265 / 0.038508 (0.063757) | 0.074755 / 0.023109 (0.051646) | 0.431317 / 0.275898 (0.155419) | 0.472179 / 0.323480 (0.148699) | 0.006153 / 0.007986 (-0.001833) | 0.003832 / 0.004328 (-0.000496) | 0.078480 / 0.004250 (0.074230) | 0.056250 / 0.037052 (0.019197) | 0.432938 / 0.258489 (0.174449) | 0.480983 / 0.293841 (0.187142) | 0.048861 / 0.128546 (-0.079685) | 0.016252 / 0.075646 (-0.059394) | 0.343508 / 0.419271 (-0.075763) | 0.065057 / 0.043533 (0.021524) | 0.468418 / 0.255139 (0.213279) | 0.463692 / 0.283200 (0.180492) | 0.032912 / 0.141683 (-0.108771) | 1.795194 / 1.452155 (0.343039) | 1.833047 / 1.492716 (0.340331) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197980 / 0.018006 (0.179974) | 0.500662 / 0.000490 (0.500172) | 0.007380 / 0.000200 (0.007181) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028323 / 0.037411 (-0.009089) | 0.089817 / 0.014526 (0.075291) | 0.102923 / 0.176557 (-0.073633) | 0.173851 / 0.737135 (-0.563284) | 0.104006 / 0.296338 (-0.192333) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.580277 / 0.215209 (0.365068) | 5.878739 / 2.077655 (3.801085) | 2.404673 / 1.504120 (0.900553) | 2.071765 / 1.541195 (0.530571) | 2.106024 / 1.468490 
(0.637534) | 0.855217 / 4.584777 (-3.729560) | 4.918602 / 3.745712 (1.172890) | 5.354984 / 5.269862 (0.085122) | 3.141288 / 4.565676 (-1.424389) | 0.099553 / 0.424275 (-0.324723) | 0.008152 / 0.007607 (0.000545) | 0.709857 / 0.226044 (0.483813) | 7.144602 / 2.268929 (4.875673) | 3.137637 / 55.444624 (-52.306987) | 2.379851 / 6.876477 (-4.496626) | 2.346426 / 2.142072 (0.204353) | 1.033416 / 4.805227 (-3.771811) | 0.213120 / 6.500664 (-6.287544) | 0.076037 / 0.075469 (0.000568) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.597742 / 1.841788 (-0.244046) | 21.745366 / 8.074308 (13.671058) | 20.830698 / 10.191392 (10.639306) | 0.238727 / 0.680424 (-0.441697) | 0.027923 / 0.534201 (-0.506278) | 0.466073 / 0.579283 (-0.113210) | 0.548647 / 0.434364 (0.114283) | 0.549245 / 0.540337 (0.008908) | 0.977148 / 1.386936 (-0.409788) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008252 / 0.011353 (-0.003101) | 0.004653 / 0.011008 (-0.006356) | 0.084012 / 0.038508 (0.045504) | 0.077418 / 0.023109 (0.054309) | 0.440748 / 0.275898 (0.164850) | 0.464279 / 0.323480 (0.140799) | 0.005762 / 0.007986 (-0.002224) | 0.004909 / 0.004328 (0.000581) | 0.086441 / 0.004250 (0.082190) | 0.057883 / 0.037052 (0.020831) | 0.466655 / 0.258489 (0.208166) | 0.479751 / 0.293841 (0.185910) | 0.047166 / 0.128546 (-0.081380) | 0.014480 / 0.075646 (-0.061166) | 0.092599 / 0.419271 (-0.326672) | 0.062454 / 0.043533 (0.018921) | 0.449753 / 0.255139 (0.194614) | 0.461876 / 0.283200 (0.178676) | 0.034828 / 0.141683 (-0.106855) | 1.752249 / 1.452155 (0.300095) | 1.865449 / 1.492716 (0.372732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245028 / 0.018006 (0.227022) | 0.509564 / 0.000490 (0.509074) | 0.003930 / 0.000200 (0.003730) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034746 / 0.037411 (-0.002665) | 0.096563 / 0.014526 (0.082037) | 0.107581 / 0.176557 (-0.068975) | 0.184952 / 0.737135 (-0.552184) | 0.108747 / 0.296338 (-0.187591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.613091 / 0.215209 (0.397882) | 5.994985 / 2.077655 (3.917330) | 2.711276 / 1.504120 (1.207156) | 2.415862 / 1.541195 (0.874668) | 2.391055 / 1.468490 (0.922565) | 0.868723 / 4.584777 (-3.716054) | 4.953992 / 3.745712 (1.208280) | 4.606542 / 5.269862 (-0.663319) | 2.942162 / 4.565676 (-1.623515) | 0.102737 / 0.424275 (-0.321538) | 0.008634 / 0.007607 (0.001027) | 0.722122 / 0.226044 (0.496078) | 7.245097 / 2.268929 (4.976168) | 3.428232 / 55.444624 (-52.016393) | 2.709539 / 6.876477 (-4.166938) | 2.857956 / 2.142072 (0.715884) | 1.045594 / 4.805227 (-3.759634) | 0.213344 / 6.500664 (-6.287320) | 0.073601 / 0.075469 (-0.001868) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651954 / 1.841788 (-0.189834) | 22.458646 / 8.074308 (14.384338) | 19.583203 / 10.191392 (9.391811) | 0.246932 / 0.680424 (-0.433492) | 0.025730 / 0.534201 (-0.508471) | 0.473475 / 0.579283 (-0.105808) | 0.521411 / 0.434364 (0.087047) | 0.562038 / 0.540337 (0.021700) | 0.767673 / 1.386936 (-0.619263) |\n\n</details>\n</details>\n\n\n",
"The CI error is unrelated.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006649 / 0.011353 (-0.004703) | 0.003963 / 0.011008 (-0.007045) | 0.084564 / 0.038508 (0.046056) | 0.075668 / 0.023109 (0.052559) | 0.314233 / 0.275898 (0.038335) | 0.343320 / 0.323480 (0.019841) | 0.005405 / 0.007986 (-0.002581) | 0.003356 / 0.004328 (-0.000973) | 0.065094 / 0.004250 (0.060844) | 0.058774 / 0.037052 (0.021722) | 0.320772 / 0.258489 (0.062283) | 0.353546 / 0.293841 (0.059705) | 0.030921 / 0.128546 (-0.097625) | 0.008463 / 0.075646 (-0.067184) | 0.287490 / 0.419271 (-0.131781) | 0.053188 / 0.043533 (0.009656) | 0.324023 / 0.255139 (0.068884) | 0.337828 / 0.283200 (0.054628) | 0.024764 / 0.141683 (-0.116918) | 1.458028 / 1.452155 (0.005873) | 1.521615 / 1.492716 (0.028899) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209360 / 0.018006 (0.191353) | 0.461331 / 0.000490 (0.460841) | 0.000386 / 0.000200 (0.000186) | 0.000052 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028405 / 0.037411 (-0.009006) | 0.081074 / 0.014526 (0.066548) | 0.094868 / 0.176557 (-0.081689) | 0.151050 / 0.737135 (-0.586085) | 0.095854 / 0.296338 (-0.200484) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393957 / 0.215209 (0.178748) | 3.938649 / 2.077655 (1.860994) | 1.938190 / 1.504120 (0.434070) | 1.766458 / 1.541195 (0.225263) | 1.818028 / 1.468490 
(0.349538) | 0.483926 / 4.584777 (-4.100851) | 3.641957 / 3.745712 (-0.103755) | 4.883845 / 5.269862 (-0.386016) | 2.960300 / 4.565676 (-1.605377) | 0.057227 / 0.424275 (-0.367048) | 0.007285 / 0.007607 (-0.000322) | 0.475928 / 0.226044 (0.249884) | 4.756757 / 2.268929 (2.487828) | 2.502659 / 55.444624 (-52.941966) | 2.178067 / 6.876477 (-4.698410) | 2.378298 / 2.142072 (0.236226) | 0.578639 / 4.805227 (-4.226588) | 0.132512 / 6.500664 (-6.368152) | 0.059656 / 0.075469 (-0.015813) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272673 / 1.841788 (-0.569115) | 19.266884 / 8.074308 (11.192576) | 14.272930 / 10.191392 (4.081538) | 0.165897 / 0.680424 (-0.514527) | 0.018436 / 0.534201 (-0.515765) | 0.395177 / 0.579283 (-0.184107) | 0.420134 / 0.434364 (-0.014229) | 0.460781 / 0.540337 (-0.079557) | 0.645376 / 1.386936 (-0.741560) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006504 / 0.011353 (-0.004849) | 0.003942 / 0.011008 (-0.007066) | 0.064936 / 0.038508 (0.026428) | 0.075015 / 0.023109 (0.051905) | 0.396871 / 0.275898 (0.120973) | 0.423448 / 0.323480 (0.099968) | 0.005239 / 0.007986 (-0.002747) | 0.003265 / 0.004328 (-0.001063) | 0.064910 / 0.004250 (0.060660) | 0.055006 / 0.037052 (0.017953) | 0.392818 / 0.258489 (0.134329) | 0.429735 / 0.293841 (0.135894) | 0.031847 / 0.128546 (-0.096699) | 0.008626 / 0.075646 (-0.067021) | 0.071591 / 0.419271 (-0.347681) | 0.049006 / 0.043533 (0.005473) | 0.384913 / 0.255139 (0.129774) | 0.408969 / 0.283200 (0.125769) | 0.023573 / 0.141683 (-0.118110) | 1.490271 / 1.452155 (0.038117) | 1.564620 / 1.492716 (0.071904) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225917 / 0.018006 (0.207911) | 0.450369 / 0.000490 (0.449880) | 0.000375 / 0.000200 (0.000175) | 0.000055 / 0.000054 (0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031196 / 0.037411 (-0.006215) | 0.090486 / 0.014526 (0.075960) | 0.102326 / 0.176557 (-0.074231) | 0.157483 / 0.737135 (-0.579653) | 0.103670 / 0.296338 (-0.192668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417577 / 0.215209 (0.202368) | 4.170798 / 2.077655 (2.093143) | 2.123689 / 1.504120 (0.619569) | 1.948231 / 1.541195 (0.407037) | 2.040277 / 1.468490 (0.571787) | 0.497919 / 4.584777 (-4.086858) | 3.633270 / 3.745712 (-0.112442) | 4.851698 / 5.269862 (-0.418164) | 2.691992 / 4.565676 (-1.873684) | 0.058641 / 0.424275 (-0.365634) | 0.007719 / 0.007607 (0.000112) | 0.500652 / 0.226044 (0.274607) | 4.988657 / 2.268929 (2.719728) | 2.604488 / 55.444624 (-52.840136) | 2.329829 / 6.876477 (-4.546648) | 2.468239 / 2.142072 (0.326167) | 0.598724 / 4.805227 (-4.206503) | 0.135959 / 6.500664 (-6.364706) | 0.061088 / 0.075469 (-0.014381) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.352107 / 1.841788 (-0.489681) | 19.973976 / 8.074308 (11.899668) | 14.292812 / 10.191392 (4.101420) | 0.163855 / 0.680424 (-0.516568) | 0.018402 / 0.534201 (-0.515799) | 0.393128 / 0.579283 (-0.186155) | 0.407379 / 0.434364 (-0.026985) | 0.462324 / 0.540337 (-0.078013) | 0.607501 / 1.386936 (-0.779435) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/events | https://github.com/huggingface/datasets/issues/1838 | 803,557,521 | MDU6SXNzdWU4MDM1NTc1MjE= | 1,838 | Add tedlium | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",
"default": false,
"description": "",
"id": 2725241052,
"name": "speech",
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech"
}
] | closed | false | null | 2 | 2021-02-08T13:17:52Z | 2022-10-04T14:34:12Z | 2022-10-04T14:34:12Z | null | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper / Homepage:** http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/, and https://www.openslr.org/51/
- **Data:** http://www.openslr.org/7/
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium
If interested in tackling this issue, feel free to tag @patrickvonplaten
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
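Once added, loading the corpus would presumably look like the sketch below (a hedged example: the dataset id `tedlium`, the config name `release1`, and the `text` field are assumptions based on this request and the TED-LIUM releases, not a confirmed API):

```python
from datasets import load_dataset

# Dataset id, config name, and column names are assumed; check the Hub for the final naming.
tedlium = load_dataset("tedlium", "release1", split="train")
print(tedlium[0]["text"])  # transcription of the first talk segment (field name assumed)
```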
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | null | completed | null | null | false | [
"Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0",
"Resolved via https://github.com/huggingface/datasets/pull/4309"
] |
https://api.github.com/repos/huggingface/datasets/issues/679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/679/comments | https://api.github.com/repos/huggingface/datasets/issues/679/events | https://github.com/huggingface/datasets/pull/679 | 710,065,838 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx | 679 | Fix negative ids when slicing with an array | [] | closed | false | null | 0 | 2020-09-28T08:39:08Z | 2020-09-28T14:42:20Z | 2020-09-28T14:42:19Z | null | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])
# OverflowError
```
raises an `OverflowError` because of the negative index.
This PR fixes that.
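For reference, a minimal sketch of the intended behavior after the fix (the output shown is an assumption based on the example above, mirroring Python's negative indexing, not taken from the PR itself):

```python
from datasets import Dataset

d = Dataset.from_dict({"a": list(range(10))})
# -1 should refer to the last row, like standard Python indexing,
# so this should return rows 0 and 9 instead of raising OverflowError.
print(d[[0, -1]])  # expected: {'a': [0, 9]}
```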
Fix #668 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/679/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/679/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/679.diff",
"html_url": "https://github.com/huggingface/datasets/pull/679",
"merged_at": "2020-09-28T14:42:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/679.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/679"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/363/comments | https://api.github.com/repos/huggingface/datasets/issues/363/events | https://github.com/huggingface/datasets/pull/363 | 653,821,172 | MDExOlB1bGxSZXF1ZXN0NDQ2NjY0NDIy | 363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | [] | closed | false | null | 23 | 2020-07-09T07:10:30Z | 2020-08-24T09:59:35Z | 2020-08-24T09:59:35Z | null | nlp/features.py:
The main factory class is MultiArray. Every time this class is called, a corresponding pyarrow extension array and type class are generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py.
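As a rough illustration, a fixed-shape 2D feature declared this way would be used along these lines (a hedged sketch written against the present-day `datasets` API that this work evolved into; the column name and shapes are made up):

```python
import numpy as np
from datasets import Array2D, Dataset, Features

# Declare a column of fixed-shape 5x5 float32 arrays.
features = Features({"image": Array2D(shape=(5, 5), dtype="float32")})

data = {"image": [np.random.rand(5, 5).astype("float32") for _ in range(2)]}
dataset = Dataset.from_dict(data, features=features)
print(len(dataset), np.array(dataset[0]["image"]).shape)  # 2 (5, 5)
```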
src/nlp/arrow_writer.py
I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of type pyarrow.ExtensionType. The problem is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema only refers to a generic ExtensionArray, while each ExtensionArray subclass has a different shape)... possibly I am missing something here and would be grateful if anyone else could take a look!
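To make that workaround concrete, here is a minimal pyarrow-only sketch of the "one single-row table per example, then concatenate" pattern described above (plain nested lists stand in for the extension arrays, and all names are made up):

```python
import pyarrow as pa

rows = [{"image": [[1, 2], [3, 4]]}, {"image": [[5, 6], [7, 8]]}]

# Build a one-row table per example so each row is written against a
# consistent schema, then concatenate the tables into a single one.
tables = [pa.Table.from_pydict({k: [v] for k, v in row.items()}) for row in rows]
combined = pa.concat_tables(tables)

# Re-chunk into record batches of a chosen size before writing to disk.
batches = combined.combine_chunks().to_batches(max_chunksize=1000)
print(combined.num_rows, len(batches))  # 2 1
```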
datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py:
I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490), hosted here: https://github.com/airsplay/lxmert. The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy:).
For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"!
(Still working on the pretraining; I just wanted to push out the new functionality sooner rather than later.)
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/363/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/363",
"merged_at": "2020-08-24T09:59:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/363"
} | true | [
"Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing script you sent me earlier since it ended up being tremendously helpful. ",
"Okay, I just converted the MultiArray class to Array2D, and got rid of all those \"globals()\"! \r\n\r\nThe main issues I had were that when including a \"pa.ExtensionType\" as a column, the ordinary methods to batch the data would not work and it would throw me some mysterious error, so I first cleaned up my code to order the row to match the schema (because when including extension types the row is disordered ) and then made each row a pa.Table and then concatenated all the tables. Also each n-dimensional vector class we implement will be size invariant which is some good news. ",
"Okay awesome! I just added your suggestions and changed up my recursive functions. \r\n\r\nHere is the traceback for the when I use the original code in the write_on_file method:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 33, in <module>\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 214, in finalize\r\n self.write_on_file()\r\n File \"/home/eltoto/nlp/src/nlp/arrow_writer.py\", line 134, in write_on_file\r\n pa_array = pa.array(self.current_rows, type=self._type)\r\n File \"pyarrow/array.pxi\", line 269, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 38, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 106, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>\r\n\r\nshell returned 1\r\n```\r\n\r\nI think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround. \r\n\r\nIn the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(***batch_size***) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.",
"> I think when trying to cast an extension array within a list of dictionaries, some method gets called that bugs out Arrow and somehow doesn't get called when adding a single row to a a table and then appending multiple tables together. I tinkered with this for a while but could not find any workaround.\r\n\r\nIndeed that's weird.\r\n\r\n> In the case that this new method causes bad compression/worse performance, we can explicitly set the batch size in the pa.Table.to_batches(batch_size) method, which will return a list of batches. Perhaps, we can check that the batch size is not too large converting the table to batches after X many rows are appended to it by following the batch_size check below.\r\n\r\nThe argument of `pa.Table.to_batches` is not `batch_size` but `max_chunksize`, which means that right now it would have no effects (each chunk is of length 1).\r\n\r\nWe can fix that just by doing `entries.combine_chunks().to_batches(batch_size)`. In that case it would write by chunk of 1000 which is what we want. I don't think it will slow down the writing by much, but we may have to do a benchmark just to make sure. If speed is ok we could even replace the original code to always write chunks this way.\r\n\r\nDo you still have errors that need to be fixed ?",
"@lhoestq Nope all should be good! \r\n\r\nWould you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?",
"> @lhoestq Nope all should be good!\r\n\r\nAwesome :)\r\n\r\nI think it would be good to start to add some tests then.\r\nYou already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n\r\n> Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n\r\nThat would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n- write speed + read speed a dataset with `nlp.Array2D` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n- write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\nIt will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n\r\nWhat do you think ?",
"Well actually it looks like we're still having the `print(dataset[0])` error no ?",
"I just tested your code to try to understand better.\r\n\r\n\r\n- First thing you must know is that we've switched from `dataset._data.to_pandas` to `dataset._data.to_pydict` by default when we call `dataset[0]` in #423 . Right now it raises an error but it can be fixed by adding this method to `ExtensionArray2D`:\r\n\r\n```python\r\n def to_pylist(self):\r\n return self.to_numpy().tolist()\r\n```\r\n\r\n- Second, I noticed that `ExtensionArray2D.to_numpy()` always return a (5, 5) shape in your example. I thought `ExtensionArray` was for possibly multiple examples and so I was expecting a shape like (1, 5, 5) for example. Did I miss something ?\r\nTherefore when I apply the fix I mentioned (adding to_pylist), it returns one example per row in each image (in your example of 2 images of shape 5x5, I get `len(dataset._data.to_pydict()[\"image\"]) == 10 # True`)\r\n\r\n[EDIT] I changed the reshape step in `ExtensionArray2D.to_numpy()` by\r\n```python\r\nnumpy_arr = numpy_arr.reshape(len(self), *ExtensionArray2D._construct_shape(self.storage))\r\n```\r\nand it did the job: `len(dataset._data.to_pydict()[\"image\"]) == 2 # True`\r\n\r\n- Finally, I was able to make `to_pandas` work though, by implementing custom array dtype in pandas with arrow conversion (I got inspiration from [here](https://gist.github.com/Eastsun/a59fb0438f65e8643cd61d8c98ec4c08) and [here](https://pandas.pydata.org/pandas-docs/version/1.0.0/development/extending.html#compatibility-with-apache-arrow))\r\n\r\nMaybe you could add me in your repo so I can open a PR to add these changes to your branch ?",
"`combine_chunks` doesn't seem to work btw:\r\n`ArrowNotImplementedError: concatenation of extension<arrow.py_extension_type>`",
"> > @lhoestq Nope all should be good!\r\n> \r\n> Awesome :)\r\n> \r\n> I think it would be good to start to add some tests then.\r\n> You already have `test_multi_array.py` which is a good start, maybe you can place it in /tests and make it a `unittest.TestCase` ?\r\n> \r\n> > Would you like me to add the entries.combine_chunks().to_batch_size() code + benchmark?\r\n> \r\n> That would be interesting. We don't want reading/writing to be the bottleneck of dataset processing for example in terms of speed. Maybe we could test the write + read speed of different datasets:\r\n> \r\n> * write speed + read speed a dataset with `nlp.Array2D` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Sequence(nlp.Value(\"float32\")))` features\r\n> * write speed + read speed a dataset with `nlp.Sequence(nlp.Value(\"float32\"))` features (same data but flatten)\r\n> It will be interesting to see the influence of `.combine_chunks()` on the `Array2D` test too.\r\n> \r\n> What do you think ?\r\n\r\nYa! that should be no problem at all, Ill use the timeit module and get back to you with the results sometime over the weekend.",
"Thank you for all your help getting the pandas and row indexing for the dataset to work! For `print(dataset[0])`, I considered the workaround of doing `print(dataset[\"col_name\"][0])` a temporary solution, but ya, I was never able to figure out how to previously get it to work. I'll add you to my repo right now, let me know if you do not see the invite. Also lastly, it is strange how the to_batches method is not working, so I can check that out while I add some speed tests + add the multi dim test under the unit tests this weekend. ",
"I created the PR :)\r\nI also tested `to_batches` and it works on my side",
"Sorry for the bit of delay! I just added the tests, the PR into my fork, and some speed tests. It should be fairly easy to add more tests if we need. Do you think there is anything else to checkout?",
"Cool thanks for adding the tests :) \r\n\r\nNext step is merge master into this branch.\r\nNot sure I understand what you did in your last commit, but it looks like you discarded all the changes from master ^^'\r\n\r\nWe've done some changes in the features logic on master, so let me know if you need help merging it.\r\n\r\nAs soon as we've merged from master, we'll have to make sure that we have extensive tests and we'll be good to do !\r\nAbout the lxmert dataset, we can probably keep it for another PR as soon as we have working 2d features. What do you think ?",
"We might want to merge this after tomorrow's release though to avoid potential side effects @lhoestq ",
"Yep I'm sure we can have it not for tomorrow's release but for the next one ;)",
"haha, when I tried to rebase, I ran into some conflicts. In that last commit, I restored the features.py from the previous commit on the branch in my fork because upon updating to master, the pandasdtypemanger and pandas extension types disappeared. If you actually could help me with merging in what is needed, that would actually help a lot. \r\n\r\nOther than that, ya let me go ahead and move the dataloader code out of this PR. Perhaps we could discuss in the slack channelk soon about what to do with that because we can either just support the pretraining corpus for lxmert or try to implement the full COCO and visual genome datasets (+VQA +GQA) which im sure people would be pretty happy about. \r\n\r\nAlso we can talk more tests soon too when you are free. \r\n\r\nGoodluck on the release tomorrow guys!",
"Not sure why github thinks there are conflicts here, as I just rebased from the current master branch.\r\nMerging into master locally works on my side without conflicts\r\n```\r\ngit checkout master\r\ngit reset --hard origin/master\r\ngit merge --no-ff eltoto1219/support_multi_dim_tensors_for_images\r\nMerge made by the 'recursive' strategy.\r\n datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py | 89 +++++++++++++++++++++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/test_multi_array.py | 45 +++++++++++++++++++\r\n datasets/lxmert_pretraining_beta/to_arrow_data.py | 371 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n src/nlp/arrow_dataset.py | 24 +++++-----\r\n src/nlp/arrow_writer.py | 22 ++++++++--\r\n src/nlp/features.py | 229 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---\r\n tests/test_array_2d.py | 210 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++\r\n 7 files changed, 969 insertions(+), 21 deletions(-)\r\n create mode 100644 datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/test_multi_array.py\r\n create mode 100644 datasets/lxmert_pretraining_beta/to_arrow_data.py\r\n create mode 100644 tests/test_array_2d.py\r\n```",
"I put everything inside one commit from the master branch but the merge conflicts on github'side were still there for some reason.\r\nClosing and re-opening the PR fixed the conflict check on github's side.",
"Almost done ! It still needs a pass on the docs/comments and maybe a few more tests.\r\n\r\nI had to do several changes for type inference in the ArrowWriter to make it support custom types.",
"Ok this is now ready for review ! Thanks for your awesome work in this @eltoto1219 \r\n\r\nSummary of the changes:\r\n- added new feature type `Array2D`, that can be instantiated like `Array2D(\"float32\")` for example\r\n- added pyarrow extension type `Array2DExtensionType` and array `Array2DExtensionArray` that take care of converting from and to arrow. `Array2DExtensionType`'s storage is a list of list of any pyarrow array.\r\n- added pandas extension type `PandasArrayExtensionType` and array `PandasArrayExtensionArray` for conversion from and to arrow/python objects\r\n- refactor of the `ArrowWriter` write and write_batch functions to support extension types while preserving the type inference behavior.\r\n- added a utility object `TypedSequence` that is helpful to combine extension arrays and type inference inside the writer's methods.\r\n- added speed test for sequences writing (printed as warnings in pytest)\r\n- breaking: set disable_nullable to False by default as pyarrow's type inference returns nullable fields\r\n\r\nAnd there are plenty of new tests, mainly in `test_array2d.py` and `test_arrow_writer.py`.\r\n\r\nNote that there are some collisions in `arrow_dataset.py` with #513 so let's be careful when we'll merge this one.\r\n\r\nI know this is a big PR so feel free to ask questions",
"I'll add Array3D, 4D.. tomorrow but it should take only a few lines. The rest won't change",
"I took your comments into account and I added Array[3-5]D.\r\nI changed the storage type to fixed lengths lists. I had to update the `to_numpy` function because of that. Indeed slicing a FixedLengthListArray returns a view a of the original array, while in the previous case slicing a ListArray copies the storage.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5935/comments | https://api.github.com/repos/huggingface/datasets/issues/5935/events | https://github.com/huggingface/datasets/pull/5935 | 1,748,090,220 | PR_kwDODunzps5Sh9Mg | 5,935 | Better row group size in push_to_hub | [] | closed | false | null | 10 | 2023-06-08T15:01:15Z | 2023-06-09T17:47:37Z | 2023-06-09T17:40:09Z | null | This is a very simple change that improves `to_parquet` to use a more reasonable row group size for image and audio datasets.
This is especially useful for `push_to_hub` and will provide a better experience with the dataset viewer on HF | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5935/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5935",
"merged_at": "2023-06-09T17:40:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5935"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007489 / 0.011353 (-0.003864) | 0.004914 / 0.011008 (-0.006095) | 0.111626 / 0.038508 (0.073117) | 0.037920 / 0.023109 (0.014811) | 0.350571 / 0.275898 (0.074673) | 0.389667 / 0.323480 (0.066187) | 0.006309 / 0.007986 (-0.001676) | 0.005488 / 0.004328 (0.001160) | 0.083962 / 0.004250 (0.079712) | 0.050728 / 0.037052 (0.013675) | 0.360997 / 0.258489 (0.102508) | 0.392736 / 0.293841 (0.098895) | 0.031975 / 0.128546 (-0.096571) | 0.009941 / 0.075646 (-0.065705) | 0.379840 / 0.419271 (-0.039432) | 0.056522 / 0.043533 (0.012989) | 0.359379 / 0.255139 (0.104240) | 0.384487 / 0.283200 (0.101287) | 0.117523 / 0.141683 (-0.024160) | 1.683639 / 1.452155 (0.231485) | 1.791645 / 1.492716 (0.298929) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236862 / 0.018006 (0.218856) | 0.481208 / 0.000490 (0.480719) | 0.007455 / 0.000200 (0.007255) | 0.000111 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030854 / 0.037411 (-0.006557) | 0.126892 / 0.014526 (0.112367) | 0.139207 / 0.176557 (-0.037350) | 0.206447 / 0.737135 (-0.530689) | 0.143095 / 0.296338 (-0.153244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474677 / 0.215209 (0.259468) | 4.699534 / 2.077655 (2.621879) | 2.152102 / 1.504120 (0.647983) | 1.934815 / 1.541195 (0.393620) | 1.986448 / 1.468490 
(0.517958) | 0.607184 / 4.584777 (-3.977593) | 4.480385 / 3.745712 (0.734673) | 2.074729 / 5.269862 (-3.195132) | 1.182383 / 4.565676 (-3.383294) | 0.075624 / 0.424275 (-0.348651) | 0.014046 / 0.007607 (0.006439) | 0.598859 / 0.226044 (0.372814) | 5.959551 / 2.268929 (3.690622) | 2.700851 / 55.444624 (-52.743773) | 2.303775 / 6.876477 (-4.572702) | 2.456441 / 2.142072 (0.314369) | 0.747185 / 4.805227 (-4.058042) | 0.165787 / 6.500664 (-6.334878) | 0.075817 / 0.075469 (0.000348) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.411859 / 1.841788 (-0.429928) | 17.375495 / 8.074308 (9.301187) | 15.187098 / 10.191392 (4.995706) | 0.169953 / 0.680424 (-0.510471) | 0.020204 / 0.534201 (-0.513997) | 0.461424 / 0.579283 (-0.117859) | 0.494443 / 0.434364 (0.060080) | 0.544583 / 0.540337 (0.004246) | 0.648231 / 1.386936 (-0.738705) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007785 / 0.011353 (-0.003568) | 0.005314 / 0.011008 (-0.005694) | 0.087273 / 0.038508 (0.048765) | 0.037810 / 0.023109 (0.014701) | 0.425473 / 0.275898 (0.149575) | 0.459976 / 0.323480 (0.136497) | 0.007270 / 0.007986 (-0.000716) | 0.004631 / 0.004328 (0.000303) | 0.087063 / 0.004250 (0.082812) | 0.052630 / 0.037052 (0.015578) | 0.432384 / 0.258489 (0.173895) | 0.500291 / 0.293841 (0.206450) | 0.033144 / 0.128546 (-0.095402) | 0.010101 / 0.075646 (-0.065545) | 0.096068 / 0.419271 (-0.323204) | 0.062750 / 0.043533 (0.019217) | 0.419308 / 0.255139 (0.164169) | 0.437099 / 0.283200 (0.153900) | 0.122289 / 0.141683 (-0.019394) | 1.737829 / 1.452155 (0.285674) | 1.851481 / 1.492716 (0.358765) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.014277 / 0.018006 (-0.003729) | 0.489835 / 0.000490 (0.489345) | 0.008423 / 0.000200 (0.008223) | 0.000188 / 0.000054 (0.000134) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032966 / 0.037411 (-0.004445) | 0.130069 / 0.014526 (0.115544) | 0.144372 / 0.176557 (-0.032185) | 0.200400 / 0.737135 (-0.536735) | 0.149384 / 0.296338 (-0.146954) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.511542 / 0.215209 (0.296333) | 5.093879 / 2.077655 (3.016225) | 2.572088 / 1.504120 (1.067968) | 2.339118 / 1.541195 (0.797923) | 2.441637 / 1.468490 (0.973147) | 0.614818 / 4.584777 (-3.969959) | 4.724441 / 3.745712 (0.978729) | 5.431978 / 5.269862 (0.162116) | 2.257794 / 4.565676 (-2.307883) | 0.078109 / 0.424275 (-0.346166) | 0.013821 / 0.007607 (0.006214) | 0.639232 / 0.226044 (0.413188) | 6.424623 / 2.268929 (4.155694) | 3.163018 / 55.444624 (-52.281606) | 2.756786 / 6.876477 (-4.119690) | 2.808655 / 2.142072 (0.666583) | 0.745843 / 4.805227 (-4.059385) | 0.165562 / 6.500664 (-6.335102) | 0.076610 / 0.075469 (0.001141) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.738630 / 1.841788 (-0.103158) | 18.073573 / 8.074308 (9.999265) | 16.482820 / 10.191392 (6.291428) | 0.213233 / 0.680424 (-0.467191) | 0.022839 / 0.534201 (-0.511362) | 0.487043 / 0.579283 (-0.092240) | 0.512518 / 0.434364 (0.078154) | 0.549365 / 0.540337 (0.009028) | 0.656612 / 1.386936 (-0.730324) |\n\n</details>\n</details>\n\n\n",
"Good idea!\r\n\r\nI was wondering: if we want to optimize the balance between the size of downloading a row group, and the number of rows in the group, would it make sense to compute the row group size by checking the average size of the rows?\r\n\r\neg. 32x32 images could have a larger row group size than full HD images, no? Relying on the size would even remove the need to check the column types.\r\n\r\n(in this proposal, we could use the computed row group size, eg 837, or use the nearest row group size in a list of values: 10, 100, 1000, 10000)",
"Probably, but I would go for a simpler solution first :p",
"Sure! I wanted to understand if the idea made sense or not, but it's not for this PR.",
"I think it will be more useful for people who use the viewer and won't impact sequential io that much.",
"DuckDB [paragraph](https://duckdb.org/docs/data/parquet/tips.html#selecting-a-row_group_size) that explains how to choose the `row_group_size`. Our default shard size is 500 MB in `push_to_hub`, so, ideally, we should aim for 64 MB row groups (and make this part configurable for power users ๐).\r\n\r\nSo, before merging this PR, let's add a TODO or open an issue as a reminder that this can be improved.",
"I moved the config values, improved the features check and mentioned the improvements we could do in the docstring :)",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006211 / 0.011353 (-0.005141) | 0.004244 / 0.011008 (-0.006764) | 0.097941 / 0.038508 (0.059433) | 0.028564 / 0.023109 (0.005455) | 0.299651 / 0.275898 (0.023753) | 0.340694 / 0.323480 (0.017214) | 0.005161 / 0.007986 (-0.002824) | 0.004764 / 0.004328 (0.000435) | 0.075505 / 0.004250 (0.071255) | 0.039656 / 0.037052 (0.002603) | 0.309242 / 0.258489 (0.050753) | 0.350783 / 0.293841 (0.056942) | 0.025145 / 0.128546 (-0.103401) | 0.008498 / 0.075646 (-0.067148) | 0.317657 / 0.419271 (-0.101615) | 0.043926 / 0.043533 (0.000394) | 0.305915 / 0.255139 (0.050776) | 0.331630 / 0.283200 (0.048430) | 0.088564 / 0.141683 (-0.053119) | 1.533175 / 1.452155 (0.081021) | 1.581017 / 1.492716 (0.088301) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206032 / 0.018006 (0.188025) | 0.433446 / 0.000490 (0.432956) | 0.003955 / 0.000200 (0.003755) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023468 / 0.037411 (-0.013943) | 0.103292 / 0.014526 (0.088766) | 0.107234 / 0.176557 (-0.069322) | 0.168525 / 0.737135 (-0.568610) | 0.113218 / 0.296338 (-0.183120) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431085 / 0.215209 (0.215875) | 4.302082 / 2.077655 (2.224427) | 2.068290 / 1.504120 (0.564171) | 1.850718 / 1.541195 (0.309523) | 1.964261 / 1.468490 
(0.495771) | 0.547562 / 4.584777 (-4.037215) | 3.410739 / 3.745712 (-0.334974) | 1.779640 / 5.269862 (-3.490221) | 1.005466 / 4.565676 (-3.560210) | 0.066250 / 0.424275 (-0.358025) | 0.011877 / 0.007607 (0.004270) | 0.525185 / 0.226044 (0.299141) | 5.234786 / 2.268929 (2.965857) | 2.398045 / 55.444624 (-53.046580) | 2.073020 / 6.876477 (-4.803457) | 2.210753 / 2.142072 (0.068680) | 0.654897 / 4.805227 (-4.150331) | 0.134639 / 6.500664 (-6.366025) | 0.067050 / 0.075469 (-0.008419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.180210 / 1.841788 (-0.661577) | 13.613091 / 8.074308 (5.538783) | 13.441837 / 10.191392 (3.250445) | 0.146048 / 0.680424 (-0.534376) | 0.016505 / 0.534201 (-0.517696) | 0.363210 / 0.579283 (-0.216073) | 0.405484 / 0.434364 (-0.028880) | 0.428712 / 0.540337 (-0.111625) | 0.522300 / 1.386936 (-0.864636) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006147 / 0.011353 (-0.005206) | 0.004161 / 0.011008 (-0.006847) | 0.075861 / 0.038508 (0.037353) | 0.027948 / 0.023109 (0.004839) | 0.362466 / 0.275898 (0.086568) | 0.398227 / 0.323480 (0.074747) | 0.005014 / 0.007986 (-0.002972) | 0.004772 / 0.004328 (0.000444) | 0.075674 / 0.004250 (0.071423) | 0.039158 / 0.037052 (0.002106) | 0.363567 / 0.258489 (0.105078) | 0.410378 / 0.293841 (0.116537) | 0.025510 / 0.128546 (-0.103036) | 0.008528 / 0.075646 (-0.067118) | 0.081803 / 0.419271 (-0.337468) | 0.040954 / 0.043533 (-0.002579) | 0.358492 / 0.255139 (0.103353) | 0.381345 / 0.283200 (0.098145) | 0.092347 / 0.141683 (-0.049336) | 1.567695 / 1.452155 (0.115540) | 1.668412 / 1.492716 (0.175696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203367 / 0.018006 (0.185360) | 0.424642 / 0.000490 (0.424152) | 0.002451 / 0.000200 (0.002251) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026129 / 0.037411 (-0.011282) | 0.102564 / 0.014526 (0.088039) | 0.110583 / 0.176557 (-0.065973) | 0.164332 / 0.737135 (-0.572804) | 0.115706 / 0.296338 (-0.180632) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468925 / 0.215209 (0.253716) | 4.657266 / 2.077655 (2.579612) | 2.423280 / 1.504120 (0.919160) | 2.236284 / 1.541195 (0.695089) | 2.323019 / 1.468490 (0.854529) | 0.548120 / 4.584777 (-4.036657) | 3.455602 / 3.745712 (-0.290110) | 1.730421 / 5.269862 (-3.539441) | 1.006089 / 4.565676 (-3.559588) | 0.067478 / 0.424275 (-0.356797) | 0.011465 / 0.007607 (0.003857) | 0.574235 / 0.226044 (0.348190) | 5.744404 / 2.268929 (3.475475) | 2.882225 / 55.444624 (-52.562400) | 2.618246 / 6.876477 (-4.258231) | 2.642920 / 2.142072 (0.500847) | 0.661441 / 4.805227 (-4.143787) | 0.137358 / 6.500664 (-6.363306) | 0.070372 / 0.075469 (-0.005097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.333815 / 1.841788 (-0.507973) | 14.689667 / 8.074308 (6.615359) | 14.362294 / 10.191392 (4.170902) | 0.152011 / 0.680424 (-0.528413) | 0.016869 / 0.534201 (-0.517332) | 0.370433 / 0.579283 (-0.208851) | 0.399642 / 0.434364 (-0.034722) | 0.433759 / 0.540337 (-0.106578) | 0.525443 / 1.386936 (-0.861493) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006564 / 0.011353 (-0.004789) | 0.004350 / 0.011008 (-0.006658) | 0.096277 / 0.038508 (0.057769) | 0.032956 / 0.023109 (0.009847) | 0.303675 / 0.275898 (0.027777) | 0.336384 / 0.323480 (0.012904) | 0.005789 / 0.007986 (-0.002197) | 0.003957 / 0.004328 (-0.000371) | 0.073990 / 0.004250 (0.069740) | 0.050974 / 0.037052 (0.013922) | 0.321754 / 0.258489 (0.063265) | 0.349489 / 0.293841 (0.055648) | 0.031138 / 0.128546 (-0.097409) | 0.009000 / 0.075646 (-0.066646) | 0.325445 / 0.419271 (-0.093826) | 0.070173 / 0.043533 (0.026640) | 0.304706 / 0.255139 (0.049567) | 0.321803 / 0.283200 (0.038603) | 0.109405 / 0.141683 (-0.032278) | 1.489812 / 1.452155 (0.037657) | 1.577729 / 1.492716 (0.085013) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.287187 / 0.018006 (0.269181) | 0.527625 / 0.000490 (0.527135) | 0.006533 / 0.000200 (0.006333) | 0.000090 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026659 / 0.037411 (-0.010752) | 0.106236 / 0.014526 (0.091710) | 0.118615 / 0.176557 (-0.057941) | 0.173156 / 0.737135 (-0.563979) | 0.122883 / 0.296338 (-0.173456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407189 / 0.215209 (0.191980) | 4.055732 / 2.077655 (1.978078) | 1.865594 / 1.504120 (0.361474) | 1.664325 / 1.541195 (0.123130) | 1.668961 / 1.468490 
(0.200471) | 0.521207 / 4.584777 (-4.063570) | 3.740424 / 3.745712 (-0.005288) | 3.431973 / 5.269862 (-1.837889) | 1.636669 / 4.565676 (-2.929008) | 0.065271 / 0.424275 (-0.359005) | 0.012151 / 0.007607 (0.004544) | 0.514233 / 0.226044 (0.288189) | 5.110150 / 2.268929 (2.841222) | 2.264340 / 55.444624 (-53.180284) | 1.940428 / 6.876477 (-4.936049) | 2.042286 / 2.142072 (-0.099787) | 0.639200 / 4.805227 (-4.166028) | 0.139537 / 6.500664 (-6.361127) | 0.063195 / 0.075469 (-0.012274) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.179501 / 1.841788 (-0.662286) | 14.600133 / 8.074308 (6.525825) | 14.902137 / 10.191392 (4.710745) | 0.144509 / 0.680424 (-0.535915) | 0.017449 / 0.534201 (-0.516752) | 0.393135 / 0.579283 (-0.186148) | 0.413103 / 0.434364 (-0.021261) | 0.459897 / 0.540337 (-0.080440) | 0.552602 / 1.386936 (-0.834334) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006891 / 0.011353 (-0.004462) | 0.004633 / 0.011008 (-0.006375) | 0.073093 / 0.038508 (0.034585) | 0.032509 / 0.023109 (0.009399) | 0.348332 / 0.275898 (0.072434) | 0.381920 / 0.323480 (0.058440) | 0.005978 / 0.007986 (-0.002007) | 0.005360 / 0.004328 (0.001032) | 0.074307 / 0.004250 (0.070056) | 0.049668 / 0.037052 (0.012615) | 0.354713 / 0.258489 (0.096224) | 0.398521 / 0.293841 (0.104681) | 0.032013 / 0.128546 (-0.096534) | 0.008890 / 0.075646 (-0.066756) | 0.080013 / 0.419271 (-0.339259) | 0.051820 / 0.043533 (0.008288) | 0.349730 / 0.255139 (0.094591) | 0.369267 / 0.283200 (0.086067) | 0.103874 / 0.141683 (-0.037809) | 1.484148 / 1.452155 (0.031993) | 1.573927 / 1.492716 (0.081211) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009699 / 0.018006 (-0.008307) | 0.511176 / 0.000490 (0.510686) | 0.002938 / 0.000200 (0.002738) | 0.000109 / 0.000054 (0.000054) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027847 / 0.037411 (-0.009564) | 0.111565 / 0.014526 (0.097039) | 0.120625 / 0.176557 (-0.055932) | 0.172130 / 0.737135 (-0.565006) | 0.125949 / 0.296338 (-0.170389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430634 / 0.215209 (0.215424) | 4.315377 / 2.077655 (2.237722) | 2.070764 / 1.504120 (0.566644) | 1.881962 / 1.541195 (0.340767) | 1.904053 / 1.468490 (0.435563) | 0.524973 / 4.584777 (-4.059804) | 3.718359 / 3.745712 (-0.027353) | 3.415344 / 5.269862 (-1.854518) | 1.224568 / 4.565676 (-3.341108) | 0.065593 / 0.424275 (-0.358682) | 0.011643 / 0.007607 (0.004036) | 0.537050 / 0.226044 (0.311006) | 5.352155 / 2.268929 (3.083226) | 2.557361 / 55.444624 (-52.887263) | 2.217770 / 6.876477 (-4.658707) | 2.194975 / 2.142072 (0.052902) | 0.635142 / 4.805227 (-4.170085) | 0.140642 / 6.500664 (-6.360022) | 0.064690 / 0.075469 (-0.010779) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.266125 / 1.841788 (-0.575663) | 14.836413 / 8.074308 (6.762105) | 14.446870 / 10.191392 (4.255478) | 0.191545 / 0.680424 (-0.488878) | 0.017433 / 0.534201 (-0.516768) | 0.392296 / 0.579283 (-0.186987) | 0.420698 / 0.434364 (-0.013666) | 0.463225 / 0.540337 (-0.077112) | 0.556127 / 1.386936 (-0.830809) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2371/comments | https://api.github.com/repos/huggingface/datasets/issues/2371/events | https://github.com/huggingface/datasets/issues/2371 | 894,193,403 | MDU6SXNzdWU4OTQxOTM0MDM= | 2,371 | Align question answering tasks with sub-domains | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2021-05-18T09:47:59Z | 2023-07-25T16:52:05Z | 2023-07-25T16:52:04Z | null | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` but then it will probably mean different schemas of input/output for both (abstractive will have text for both while extractive can use span indications as well as text).
>
> Or we can also propose to use `abstractive-question-answering` and `extractive-question-answering` for instance.
> Maybe we could have `question-answering-abstractive` and `question-answering-extractive` if somehow we can use a for a completion or search in the future (detail).
> Actually I see that people organize more in terms of general tasks and sub-tasks, for instance on paperswithcode: https://paperswithcode.com/area/natural-language-processing and on nlpprogress: https://github.com/sebastianruder/NLP-progress/blob/master/english/question_answering.md#squad
>
> Probably the best is to align with one of these in terms of denomination, PaperWithCode is probably the most active and maintained and we work with them as well.
> Maybe you want to check with a few QA datasets that this schema makes sense. Typically NaturalQuestions and TriviaQA can be good second datasets to compare to, to be sure of the generality of the schema.
>
> A good recent list of QA datasets to compare the schemas among, is for instance in the UnitedQA paper: https://arxiv.org/abs/2101.00178
Investigate which grouping of QA is best suited for `datasets` and adapt / extend the QA task template accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2371/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2371/timeline | null | completed | null | null | false | [
"Closing this issue as the `task_templates` API has been deprecated."
] |
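The schema question discussed in the record above can be made concrete with `datasets.Features`. The snippet below is only an illustrative sketch of how an extractive and an abstractive question-answering schema could differ; the field names are assumptions for illustration, not a schema adopted by the library or by this issue.

```python
from datasets import Features, Sequence, Value

# Hypothetical extractive QA schema: answers are spans of the context,
# so start offsets are kept alongside the answer text (SQuAD-style).
extractive_qa_features = Features(
    {
        "question": Value("string"),
        "context": Value("string"),
        "answers": Sequence({"text": Value("string"), "answer_start": Value("int32")}),
    }
)

# Hypothetical abstractive (free-form) QA schema: answers are plain text only,
# with no span information (NQ-Open / TriviaQA-style).
abstractive_qa_features = Features(
    {
        "question": Value("string"),
        "answers": Sequence(Value("string")),
    }
)
```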
https://api.github.com/repos/huggingface/datasets/issues/4016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4016/comments | https://api.github.com/repos/huggingface/datasets/issues/4016/events | https://github.com/huggingface/datasets/pull/4016 | 1,180,557,828 | PR_kwDODunzps41AWBk | 4,016 | Support streaming blimp dataset | [] | closed | false | null | 1 | 2022-03-25T09:39:10Z | 2022-03-25T11:19:18Z | 2022-03-25T11:14:13Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4016/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4016",
"merged_at": "2022-03-25T11:14:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4016"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1083/comments | https://api.github.com/repos/huggingface/datasets/issues/1083/events | https://github.com/huggingface/datasets/pull/1083 | 756,687,101 | MDExOlB1bGxSZXF1ZXN0NTMyMTk2Nzc0 | 1,083 | Add the multilingual Exams dataset | [] | closed | false | null | 1 | 2020-12-04T00:06:04Z | 2020-12-04T17:12:00Z | 2020-12-04T17:12:00Z | null | https://github.com/mhardalov/exams-qa
`multilingual` configs have all languages mixed together
`crosslingual` mixes the languages for test but separates them for train and dev, so I've made one config per language for train/dev data and one config with the joint test set | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1083/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1083",
"merged_at": "2020-12-04T17:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1083"
} | true | [
"Will slim down the dummy files in the morning"
] |
https://api.github.com/repos/huggingface/datasets/issues/3002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3002/comments | https://api.github.com/repos/huggingface/datasets/issues/3002/events | https://github.com/huggingface/datasets/pull/3002 | 1,014,120,524 | PR_kwDODunzps4smCNO | 3,002 | Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset | [] | closed | false | null | 2 | 2021-10-02T17:44:09Z | 2021-10-13T11:48:00Z | 2021-10-13T09:03:23Z | null | This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally:
* renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency
* correctly indents `TensorflowDatasetMixin`'s docstring
* replaces `tf.data.AUTOTUNE` with `tf.data.experimental.AUTOTUNE` (we support TF>=2.2 according to the [setup.py](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/setup.py#L188) and `AUTOTUNE` has been moved to the experimental part of `tf.data` in 1.X if I'm not mistaken)
Fixes #2934 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3002/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3002",
"merged_at": "2021-10-13T09:03:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3002"
} | true | [
"@lhoestq The test passes even without the try/except block!",
"Hey, I'm a little late because I was caught up in the course work, but I double-checked this and it looks great. Thanks for fixing!"
] |
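For readers unfamiliar with the weakref-callback pattern the PR above relies on, here is a minimal, self-contained sketch of the general mechanism. The names (`_KEEP_ALIVE`, `DerivedDataset`, `derive`) are invented for illustration and this is not the code merged in the PR; it only shows how a strong reference can be dropped automatically once the consuming object is garbage collected.

```python
import weakref

# Hypothetical registry that keeps the source dataset (and, conceptually, its
# open Arrow file) alive only for as long as the derived object exists.
_KEEP_ALIVE = {}


class DerivedDataset:
    """Stand-in for the object returned by to_tf_dataset in this sketch."""


def derive(source_dataset):
    derived = DerivedDataset()
    key = id(derived)
    _KEEP_ALIVE[key] = source_dataset

    # weakref.finalize registers a callback that runs when `derived` is
    # garbage collected; here it drops the strong reference to the source,
    # which would allow an underlying Arrow file to be released/deleted.
    weakref.finalize(derived, _KEEP_ALIVE.pop, key, None)
    return derived


source = object()  # pretend this object holds an open Arrow file
tf_like = derive(source)
del tf_like        # in CPython the finalizer fires here and the registry entry is removed
print(len(_KEEP_ALIVE))  # 0
```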
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | [] | closed | false | null | 5 | 2022-06-01T01:20:40Z | 2022-06-11T12:40:23Z | 2022-06-11T12:31:16Z | null | It seems that all tests are passed | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431"
} | true | [
"These test errors are related to issue #4428 \r\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue.",
"> Awesome thanks for adding this dataset :)\r\n> \r\n> I just have one comment about the licensing.\r\n> \r\n> Also it seems that you already have the dataset in https://huggingface.co/datasets/silver/personal_dialog, so it's unnecessary to add it here\r\n\r\nThank you very much for your comment.\r\n\r\nSo, should I close this PR?",
"Thanks for fixing the licensing section :)\r\n\r\n> So, should I close this PR?\r\n\r\nYes you can close this PR, it's better if your dataset is under your namespace at https://huggingface.co/datasets/silver/personal_dialog :)\r\n\r\nDon't forget to update the licensing section on https://huggingface.co/datasets/silver/personal_dialog as well"
] |
https://api.github.com/repos/huggingface/datasets/issues/5737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5737/comments | https://api.github.com/repos/huggingface/datasets/issues/5737/events | https://github.com/huggingface/datasets/issues/5737 | 1,662,919,811 | I_kwDODunzps5jHiSD | 5,737 | ClassLabel Error | [] | closed | false | null | 2 | 2023-04-11T17:14:13Z | 2023-04-13T16:49:57Z | 2023-04-13T16:49:57Z | null | ### Describe the bug
I am still getting the error "call() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes.
### Steps to reproduce the bug
```python
from datasets import ClassLabel, Dataset

# 1. Create the ClassLabel object with 3 label values and their corresponding names
label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])

# 2. Define a dictionary with text and label fields
data = {
    'text': ['text_1', 'text_2', 'text_3'],
    'label': [1, 2, 3],
}

# 3. Create a Hugging Face dataset from the dictionary
dataset = Dataset.from_dict(data)
print(dataset.features)

# 4. Map the label values to their corresponding label names using the label object
dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])})

# 5. Print the resulting dataset
print(dataset)
```
### Expected behavior
I expect the label type to be `ClassLabel` instead of `int`.
### Environment info
python 3.9
google colab | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5737/timeline | null | completed | null | null | false | [
"Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassLabel(names=['label_1', 'label_2', 'label_3'], id=None)}\r\n```",
"thank you @stevhliu, its worked. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1506/comments | https://api.github.com/repos/huggingface/datasets/issues/1506/events | https://github.com/huggingface/datasets/pull/1506 | 763,846,074 | MDExOlB1bGxSZXF1ZXN0NTM4MTc1ODEz | 1,506 | Add nq_open question answering dataset | [] | closed | false | null | 6 | 2020-12-12T13:46:48Z | 2020-12-17T15:34:50Z | 2020-12-17T15:34:50Z | null | Added nq_open Open-domain question answering dataset.
The NQ-Open task is currently being used to evaluate submissions to the EfficientQA competition, which is part of the NeurIPS 2020 competition track. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1506/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1506.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1506",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1506.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1506"
} | true | [
"@SBrandeis thanks for the review, I applied your suggested changes, but CI is failing now not sure about the error.",
"Many thanks @Nilanshrajput !\r\nThe failing tests on CI are not related to your changes, merging master on your branch should fix them :)\r\nIf you're interested in what causes the CI to fail, checkout [this commit](https://github.com/huggingface/datasets/commit/9a0f1e20ca1e783cb14c1ab1cc2f54b0b5b201e8)",
"@SBrandeis done!\r\n",
"Hello @Nilanshrajput, your PR includes changes from other branches too now (485 files changed)\r\nWould you mind creating another branch from master with your changes and opening a new PR?",
"@SBrandeis sorry i messed up the git history, #1587 I opened this new pr! ",
"closing in favor of #1587 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4474 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4474/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4474/comments | https://api.github.com/repos/huggingface/datasets/issues/4474/events | https://github.com/huggingface/datasets/pull/4474 | 1,267,767,541 | PR_kwDODunzps45en98 | 4,474 | [Docs] How to use with PyTorch page | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-06-10T16:25:49Z | 2022-06-14T14:40:32Z | 2022-06-14T14:04:33Z | null | Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :)
cc @Rocketknight1 we can try to align both documentations contents now I think
cc @stevhliu let me know what you think ! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4474/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4474/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4474.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4474",
"merged_at": "2022-06-14T14:04:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4474.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4474"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
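As a rough idea of the workflow the documentation page above covers, the snippet below shows the common pattern of formatting a `Dataset` for PyTorch and wrapping it in a `DataLoader`. It is a generic usage sketch, not an excerpt from the page itself, and the toy data is made up.

```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# Return torch tensors for numeric columns; string columns stay as Python objects.
ds = ds.with_format("torch")

# A Dataset is a map-style dataset (it defines __len__ and __getitem__),
# so it can be passed directly to a DataLoader.
dataloader = DataLoader(ds, batch_size=2, shuffle=True)

for batch in dataloader:
    print(batch["label"])  # e.g. tensor([1, 0])
```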
https://api.github.com/repos/huggingface/datasets/issues/110 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/110/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/110/comments | https://api.github.com/repos/huggingface/datasets/issues/110/events | https://github.com/huggingface/datasets/pull/110 | 618,520,325 | MDExOlB1bGxSZXF1ZXN0NDE4MjMzODIy | 110 | fix reddit tifu dummy data | [] | closed | false | null | 0 | 2020-05-14T20:37:37Z | 2020-05-14T20:40:14Z | 2020-05-14T20:40:13Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/110/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/110/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/110.diff",
"html_url": "https://github.com/huggingface/datasets/pull/110",
"merged_at": "2020-05-14T20:40:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/110.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/110"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5016/comments | https://api.github.com/repos/huggingface/datasets/issues/5016/events | https://github.com/huggingface/datasets/pull/5016 | 1,383,883,058 | PR_kwDODunzps4_gKny | 5,016 | Fix tar extraction vuln | [] | closed | false | null | 1 | 2022-09-23T14:22:21Z | 2022-09-29T12:42:26Z | 2022-09-29T12:40:28Z | null | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I fixed it by using the solution proposed in https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python
It blocks extraction of files with an absolute path or double dots and symlinks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5016/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"merged_at": "2022-09-29T12:40:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
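The fix described in the PR above follows the member-filtering idea from the linked StackOverflow answer. Below is a minimal sketch of that kind of check, with assumed function names; it is not the exact code added in the PR, and the real implementation may handle links and error reporting differently.

```python
import os
import tarfile


def _is_within_directory(directory: str, target: str) -> bool:
    # True only if `target` resolves to a path inside `directory`.
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonprefix([abs_target, abs_directory]) == abs_directory


def safe_extract(tar_path: str, output_dir: str) -> None:
    with tarfile.open(tar_path) as tar:
        safe_members = []
        for member in tar.getmembers():
            member_path = os.path.join(output_dir, member.name)
            if not _is_within_directory(output_dir, member_path):
                raise ValueError(f"Blocked path traversal in tar archive: {member.name}")
            if member.issym() or member.islnk():
                # This sketch simply skips symlinks/hardlinks instead of extracting them.
                continue
            safe_members.append(member)
        tar.extractall(output_dir, members=safe_members)
```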
https://api.github.com/repos/huggingface/datasets/issues/5752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5752/comments | https://api.github.com/repos/huggingface/datasets/issues/5752/events | https://github.com/huggingface/datasets/issues/5752 | 1,668,574,209 | I_kwDODunzps5jdGwB | 5,752 | Streaming dataset loses `.feature` method after `.add_column` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2023-04-14T16:39:50Z | 2023-04-14T17:46:54Z | null | null | ### Describe the bug
After appending a new column to a streaming dataset using `.add_column`, we can no longer access the list of dataset features through the `.features` attribute.
### Steps to reproduce the bug
```python
from datasets import load_dataset
original_dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
print(original_dataset.features.keys())
# now add a new column to our streaming dataset
modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
print(modified_dataset.features.keys())
```
**Print Output:**
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 8
6 # now add a new column to our streaming dataset
7 modified_dataset = original_dataset.add_column("new_column", ["some random text" for _ in range(50)])
----> 8 print(modified_dataset.features.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```
We see that we get the features for the original dataset, but not the modified one with the added column.
### Expected behavior
Features should be preserved after adding a new column, i.e. calling:
```python
print(modified_dataset.features.keys())
```
Should return:
```
dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])
```
### Environment info
- `datasets` version: 2.10.2.dev0
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.3
- PyArrow version: 10.0.1
- Pandas version: 1.5.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5752/timeline | null | null | null | null | false | [
"I believe the issue resides in this line:\r\nhttps://github.com/huggingface/datasets/blob/7c3a9b057c476c40d157bd7a5d57f49066239df0/src/datasets/iterable_dataset.py#L1415\r\n\r\nIf we pass the **new** features of the dataset to the `.map` method we can return the features after adding a column, e.g.:\r\n```python\r\nfrom datasets import load_dataset, Value\r\n\r\noriginal_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"validation\", streaming=True)\r\nprint(original_dataset.features.keys())\r\n\r\n# now add a new column to our streaming dataset using our hack\r\nname = \"new_column\"\r\ncolumn = [\"some random text\" for _ in range(50)]\r\n\r\nnew_features = original_dataset.features.copy()\r\nnew_features[name] = Value(dtype=\"string\", id=None) #ย I know the correct column type for this feature\r\n\r\ndef add_column_fn(example, idx):\r\n if name in example:\r\n raise ValueError(f\"Error when adding {name}: column {name} is already in the dataset.\")\r\n return {name: column[idx]}\r\n\r\nmodified_dataset = original_dataset.map(add_column_fn, with_indices=True, features=new_features)\r\n\r\nprint(modified_dataset.features.keys())\r\n```\r\n**Print Output:**\r\n```\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\ndict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id', 'new_column'])\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1258/comments | https://api.github.com/repos/huggingface/datasets/issues/1258/events | https://github.com/huggingface/datasets/pull/1258 | 758,557,169 | MDExOlB1bGxSZXF1ZXN0NTMzNzExOTQz | 1,258 | arXiv dataset added | [] | closed | false | null | 1 | 2020-12-07T14:23:33Z | 2020-12-08T14:07:15Z | 2020-12-08T14:07:15Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1258/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1258.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1258",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1258.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1258"
} | true | [
"Need help"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1699/comments | https://api.github.com/repos/huggingface/datasets/issues/1699/events | https://github.com/huggingface/datasets/pull/1699 | 781,271,558 | MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5 | 1,699 | Update DBRD dataset card and download URL | [] | closed | false | null | 1 | 2021-01-07T12:16:43Z | 2021-01-07T13:41:39Z | 2021-01-07T13:40:59Z | null | I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:
1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.
2. I've updated the dataset card.
Cheers! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1699/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1699.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1699",
"merged_at": "2021-01-07T13:40:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1699.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1699"
} | true | [
"not sure why the CI was not triggered though"
] |
https://api.github.com/repos/huggingface/datasets/issues/3917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3917/comments | https://api.github.com/repos/huggingface/datasets/issues/3917/events | https://github.com/huggingface/datasets/pull/3917 | 1,168,906,154 | PR_kwDODunzps40bGZA | 3,917 | Create README.md | [] | closed | false | null | 1 | 2022-03-14T21:08:10Z | 2022-03-17T17:45:39Z | 2022-03-17T17:45:39Z | null | This follows the same structure as the GLUE metric card, hope that works for everyone :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3917/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3917.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3917",
"merged_at": "2022-03-17T17:45:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3917.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3917"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3917). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3894/comments | https://api.github.com/repos/huggingface/datasets/issues/3894/events | https://github.com/huggingface/datasets/pull/3894 | 1,166,611,270 | PR_kwDODunzps40TzXW | 3,894 | [docs] make dummy data creation optional | [] | closed | false | null | 3 | 2022-03-11T16:21:34Z | 2022-03-11T17:27:56Z | 2022-03-11T17:27:55Z | null | Related to #3507 : dummy data for datasets created on the Hugging Face Hub are optional.
We can discuss later whether to make them optional for datasets in this repository as well | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3894/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3894",
"merged_at": "2022-03-11T17:27:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3894"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3894). All of your documentation changes will be reflected on that endpoint.",
"The dev doc build rendering doesn't seem to be updated with my last commit for some reason",
"Merging it anyway since I'd like to share this page with users ๐ "
] |
https://api.github.com/repos/huggingface/datasets/issues/3055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3055/comments | https://api.github.com/repos/huggingface/datasets/issues/3055/events | https://github.com/huggingface/datasets/issues/3055 | 1,022,319,238 | I_kwDODunzps4871qG | 3,055 | CI test suite fails after meteor metric update | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-11T06:37:12Z | 2021-10-11T07:30:31Z | 2021-10-11T07:30:31Z | null | ## Describe the bug
CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010
Stack trace:
```
___________________ LocalMetricTest.test_load_metric_meteor ____________________
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor>
metric_name = 'meteor'
def test_load_metric(self, metric_name):
doctest.ELLIPSIS_MARKER = "[...]"
metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0])
metric = datasets.load.import_main_class(metric_module.__name__, dataset=False)
# check parameters
parameters = inspect.signature(metric._compute).parameters
self.assertTrue("predictions" in parameters)
self.assertTrue("references" in parameters)
self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs
# run doctest
with self.patch_intensive_calls(metric_name, metric_module.__name__):
with self.use_local_metrics():
> results = doctest.testmod(metric_module, verbose=True, raise_on_error=True)
tests/test_metric_common.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod
runner.run(test)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run
r = DocTestRunner.run(self, test, compileflags, out, False)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run
return self.__run(test, compileflags, out)
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run
exception)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <doctest.DebugRunner object at 0x7f4c26bd3da0>
out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0>
test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
example = <doctest.Example object at 0x7f4c26bd3eb8>
exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>)
def report_unexpected_exception(self, out, test, example, exc_info):
> raise UnexpectedException(test, example, exc_info)
E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)>
../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3055/timeline | null | completed | null | null | false | [] |
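The `TypeError` in the traceback above comes from newer NLTK versions expecting pre-tokenized input for METEOR. The following is only a sketch of that adjustment, assuming a recent `nltk` with the changed `meteor_score` signature and the usual corpora downloaded; it is not the patch that was actually applied to the metric script.

```python
import nltk
from nltk.translate.meteor_score import meteor_score

# One-time resource downloads (exact resources needed may vary by NLTK version).
nltk.download("wordnet")
nltk.download("punkt")

reference = "It is a guide to action that ensures that the military will forever heed Party commands"
hypothesis = "It is a guide to action which ensures that the military always obeys the commands of the party"

# Newer NLTK expects Iterable[str] (token lists), not raw strings.
score = meteor_score([nltk.word_tokenize(reference)], nltk.word_tokenize(hypothesis))
print(round(score, 4))
```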
https://api.github.com/repos/huggingface/datasets/issues/5455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5455/comments | https://api.github.com/repos/huggingface/datasets/issues/5455/events | https://github.com/huggingface/datasets/pull/5455 | 1,553,040,080 | PR_kwDODunzps5IUvAZ | 5,455 | Single TQDM bar in multi-proc map | [] | closed | false | null | 12 | 2023-01-23T12:49:40Z | 2023-02-13T20:23:34Z | 2023-02-13T20:16:38Z | null | Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode.
Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issues/3177
TODO:
- [x] cleaner refactor of the `_map_single` decorators now that they also have to wrap generator functions (decorate `map` instead of `map_single` with the `transmit_` decorators and predict the shards' fingerprint in `map`) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5455/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5455/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5455",
"merged_at": "2023-02-13T20:16:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5455"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008372 / 0.011353 (-0.002981) | 0.004658 / 0.011008 (-0.006350) | 0.102005 / 0.038508 (0.063497) | 0.029030 / 0.023109 (0.005920) | 0.296968 / 0.275898 (0.021070) | 0.364898 / 0.323480 (0.041418) | 0.006899 / 0.007986 (-0.001087) | 0.003410 / 0.004328 (-0.000919) | 0.079705 / 0.004250 (0.075455) | 0.034265 / 0.037052 (-0.002787) | 0.305695 / 0.258489 (0.047206) | 0.343275 / 0.293841 (0.049434) | 0.033783 / 0.128546 (-0.094763) | 0.011604 / 0.075646 (-0.064042) | 0.322577 / 0.419271 (-0.096694) | 0.040540 / 0.043533 (-0.002993) | 0.299176 / 0.255139 (0.044037) | 0.333157 / 0.283200 (0.049957) | 0.087460 / 0.141683 (-0.054223) | 1.494392 / 1.452155 (0.042237) | 1.539580 / 1.492716 (0.046863) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.176206 / 0.018006 (0.158200) | 0.413702 / 0.000490 (0.413212) | 0.002625 / 0.000200 (0.002425) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023886 / 0.037411 (-0.013525) | 0.099758 / 0.014526 (0.085232) | 0.104349 / 0.176557 (-0.072208) | 0.147138 / 0.737135 (-0.589998) | 0.108682 / 0.296338 (-0.187657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.411957 / 0.215209 (0.196748) | 4.110004 / 2.077655 (2.032349) | 1.820951 / 1.504120 (0.316831) | 1.629726 / 1.541195 (0.088532) | 1.672573 / 1.468490 
(0.204083) | 0.686627 / 4.584777 (-3.898150) | 3.382665 / 3.745712 (-0.363047) | 2.875908 / 5.269862 (-2.393954) | 1.475331 / 4.565676 (-3.090345) | 0.081353 / 0.424275 (-0.342922) | 0.012521 / 0.007607 (0.004914) | 0.516226 / 0.226044 (0.290182) | 5.157658 / 2.268929 (2.888729) | 2.302012 / 55.444624 (-53.142612) | 1.950831 / 6.876477 (-4.925646) | 1.962081 / 2.142072 (-0.179992) | 0.800007 / 4.805227 (-4.005221) | 0.148462 / 6.500664 (-6.352202) | 0.064448 / 0.075469 (-0.011021) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.227977 / 1.841788 (-0.613810) | 13.776087 / 8.074308 (5.701779) | 13.749825 / 10.191392 (3.558433) | 0.137034 / 0.680424 (-0.543390) | 0.028461 / 0.534201 (-0.505740) | 0.392335 / 0.579283 (-0.186948) | 0.397404 / 0.434364 (-0.036960) | 0.450831 / 0.540337 (-0.089507) | 0.533716 / 1.386936 (-0.853220) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006883 / 0.011353 (-0.004470) | 0.004625 / 0.011008 (-0.006383) | 0.099039 / 0.038508 (0.060531) | 0.028068 / 0.023109 (0.004958) | 0.419988 / 0.275898 (0.144090) | 0.449543 / 0.323480 (0.126063) | 0.005232 / 0.007986 (-0.002753) | 0.003527 / 0.004328 (-0.000801) | 0.076308 / 0.004250 (0.072057) | 0.040523 / 0.037052 (0.003471) | 0.420165 / 0.258489 (0.161676) | 0.463220 / 0.293841 (0.169379) | 0.032368 / 0.128546 (-0.096178) | 0.011784 / 0.075646 (-0.063863) | 0.320675 / 0.419271 (-0.098597) | 0.041861 / 0.043533 (-0.001672) | 0.424903 / 0.255139 (0.169764) | 0.443528 / 0.283200 (0.160328) | 0.090869 / 0.141683 (-0.050814) | 1.504757 / 1.452155 (0.052602) | 1.557824 / 1.492716 (0.065108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224020 / 0.018006 (0.206014) | 0.404090 / 0.000490 (0.403601) | 0.000403 / 0.000200 (0.000203) | 0.000058 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024556 / 0.037411 (-0.012855) | 0.101280 / 0.014526 (0.086754) | 0.108017 / 0.176557 (-0.068540) | 0.146679 / 0.737135 (-0.590456) | 0.111468 / 0.296338 (-0.184870) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478955 / 0.215209 (0.263746) | 4.769628 / 2.077655 (2.691973) | 2.473238 / 1.504120 (0.969118) | 2.263588 / 1.541195 (0.722393) | 2.285425 / 1.468490 (0.816935) | 0.699051 / 4.584777 (-3.885726) | 3.390495 / 3.745712 (-0.355217) | 1.858569 / 5.269862 (-3.411293) | 1.162081 / 4.565676 (-3.403596) | 0.083294 / 0.424275 (-0.340981) | 0.012410 / 0.007607 (0.004803) | 0.580786 / 0.226044 (0.354741) | 5.866868 / 2.268929 (3.597940) | 2.944358 / 55.444624 (-52.500266) | 2.596241 / 6.876477 (-4.280235) | 2.664464 / 2.142072 (0.522392) | 0.806751 / 4.805227 (-3.998476) | 0.152389 / 6.500664 (-6.348275) | 0.066945 / 0.075469 (-0.008524) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.290545 / 1.841788 (-0.551243) | 14.005727 / 8.074308 (5.931419) | 14.478951 / 10.191392 (4.287559) | 0.127488 / 0.680424 (-0.552935) | 0.016929 / 0.534201 (-0.517272) | 0.378380 / 0.579283 (-0.200904) | 0.387499 / 0.434364 (-0.046865) | 0.440816 / 0.540337 (-0.099522) | 0.525794 / 1.386936 (-0.861142) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008704 / 0.011353 (-0.002649) | 0.004474 / 0.011008 (-0.006534) | 0.101720 / 0.038508 (0.063212) | 0.030426 / 0.023109 (0.007317) | 0.298944 / 0.275898 (0.023046) | 0.371491 / 0.323480 (0.048011) | 0.007042 / 0.007986 (-0.000944) | 0.003479 / 0.004328 (-0.000850) | 0.078086 / 0.004250 (0.073835) | 0.037014 / 0.037052 (-0.000038) | 0.312964 / 0.258489 (0.054475) | 0.351251 / 0.293841 (0.057410) | 0.033286 / 0.128546 (-0.095260) | 0.011468 / 0.075646 (-0.064179) | 0.321784 / 0.419271 (-0.097488) | 0.040700 / 0.043533 (-0.002832) | 0.303799 / 0.255139 (0.048660) | 0.336982 / 0.283200 (0.053782) | 0.089448 / 0.141683 (-0.052235) | 1.462430 / 1.452155 (0.010275) | 1.524448 / 1.492716 (0.031732) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178390 / 0.018006 (0.160384) | 0.402474 / 0.000490 (0.401984) | 0.002697 / 0.000200 (0.002497) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022679 / 0.037411 (-0.014733) | 0.097759 / 0.014526 (0.083234) | 0.105102 / 0.176557 (-0.071454) | 0.140720 / 0.737135 (-0.596415) | 0.109119 / 0.296338 (-0.187219) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414153 / 0.215209 (0.198944) | 4.131799 / 2.077655 (2.054144) | 1.852325 / 1.504120 (0.348205) | 1.646955 / 1.541195 (0.105760) | 1.662880 / 1.468490 
(0.194390) | 0.693823 / 4.584777 (-3.890954) | 3.378843 / 3.745712 (-0.366869) | 1.861324 / 5.269862 (-3.408538) | 1.156916 / 4.565676 (-3.408761) | 0.082385 / 0.424275 (-0.341890) | 0.012166 / 0.007607 (0.004559) | 0.528690 / 0.226044 (0.302646) | 5.286388 / 2.268929 (3.017459) | 2.319941 / 55.444624 (-53.124684) | 1.959462 / 6.876477 (-4.917014) | 1.995102 / 2.142072 (-0.146970) | 0.817158 / 4.805227 (-3.988069) | 0.149479 / 6.500664 (-6.351185) | 0.065668 / 0.075469 (-0.009801) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.240228 / 1.841788 (-0.601560) | 13.770357 / 8.074308 (5.696048) | 13.940638 / 10.191392 (3.749246) | 0.152589 / 0.680424 (-0.527835) | 0.028498 / 0.534201 (-0.505703) | 0.392579 / 0.579283 (-0.186704) | 0.402843 / 0.434364 (-0.031521) | 0.455429 / 0.540337 (-0.084909) | 0.541090 / 1.386936 (-0.845846) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006692 / 0.011353 (-0.004661) | 0.004514 / 0.011008 (-0.006495) | 0.097058 / 0.038508 (0.058550) | 0.027780 / 0.023109 (0.004671) | 0.415806 / 0.275898 (0.139908) | 0.443079 / 0.323480 (0.119599) | 0.005181 / 0.007986 (-0.002805) | 0.003408 / 0.004328 (-0.000921) | 0.075263 / 0.004250 (0.071013) | 0.038169 / 0.037052 (0.001116) | 0.417292 / 0.258489 (0.158803) | 0.461875 / 0.293841 (0.168034) | 0.032280 / 0.128546 (-0.096266) | 0.011571 / 0.075646 (-0.064075) | 0.319091 / 0.419271 (-0.100181) | 0.048295 / 0.043533 (0.004762) | 0.423619 / 0.255139 (0.168480) | 0.435064 / 0.283200 (0.151864) | 0.094869 / 0.141683 (-0.046814) | 1.523000 / 1.452155 (0.070846) | 1.583097 / 1.492716 (0.090381) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214326 / 0.018006 (0.196320) | 0.391623 / 0.000490 (0.391134) | 0.004602 / 0.000200 (0.004403) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024306 / 0.037411 (-0.013106) | 0.101178 / 0.014526 (0.086652) | 0.108504 / 0.176557 (-0.068053) | 0.144114 / 0.737135 (-0.593022) | 0.111088 / 0.296338 (-0.185250) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472573 / 0.215209 (0.257364) | 4.748929 / 2.077655 (2.671274) | 2.441602 / 1.504120 (0.937482) | 2.238841 / 1.541195 (0.697647) | 2.303303 / 1.468490 (0.834813) | 0.696618 / 4.584777 (-3.888159) | 3.373867 / 3.745712 (-0.371845) | 2.809009 / 5.269862 (-2.460852) | 1.337240 / 4.565676 (-3.228437) | 0.082682 / 0.424275 (-0.341593) | 0.012834 / 0.007607 (0.005227) | 0.569686 / 0.226044 (0.343642) | 5.723407 / 2.268929 (3.454478) | 2.882944 / 55.444624 (-52.561680) | 2.543530 / 6.876477 (-4.332947) | 2.581856 / 2.142072 (0.439784) | 0.802353 / 4.805227 (-4.002874) | 0.149947 / 6.500664 (-6.350717) | 0.065865 / 0.075469 (-0.009604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.282146 / 1.841788 (-0.559642) | 13.831344 / 8.074308 (5.757036) | 14.081550 / 10.191392 (3.890157) | 0.141735 / 0.680424 (-0.538689) | 0.016677 / 0.534201 (-0.517524) | 0.378967 / 0.579283 (-0.200316) | 0.383775 / 0.434364 (-0.050589) | 0.432892 / 0.540337 (-0.107446) | 0.518042 / 1.386936 (-0.868894) |\n\n</details>\n</details>\n\n\n",
"Omg I love this ! cc @TevenLeScao @thomasw21 this will save your terminals from infinite streams of progress bars",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008680 / 0.011353 (-0.002673) | 0.004597 / 0.011008 (-0.006411) | 0.101154 / 0.038508 (0.062646) | 0.029831 / 0.023109 (0.006722) | 0.300619 / 0.275898 (0.024721) | 0.358259 / 0.323480 (0.034779) | 0.007284 / 0.007986 (-0.000701) | 0.003511 / 0.004328 (-0.000817) | 0.078805 / 0.004250 (0.074555) | 0.037192 / 0.037052 (0.000140) | 0.307241 / 0.258489 (0.048752) | 0.354648 / 0.293841 (0.060807) | 0.033696 / 0.128546 (-0.094851) | 0.011660 / 0.075646 (-0.063986) | 0.324266 / 0.419271 (-0.095006) | 0.043393 / 0.043533 (-0.000140) | 0.297503 / 0.255139 (0.042364) | 0.326037 / 0.283200 (0.042838) | 0.091165 / 0.141683 (-0.050517) | 1.479970 / 1.452155 (0.027816) | 1.508507 / 1.492716 (0.015791) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179995 / 0.018006 (0.161989) | 0.464282 / 0.000490 (0.463793) | 0.003953 / 0.000200 (0.003753) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022696 / 0.037411 (-0.014715) | 0.099510 / 0.014526 (0.084984) | 0.103741 / 0.176557 (-0.072816) | 0.137837 / 0.737135 (-0.599299) | 0.108776 / 0.296338 (-0.187563) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417034 / 0.215209 (0.201825) | 4.183479 / 2.077655 (2.105824) | 1.855329 / 1.504120 (0.351209) | 1.660675 / 1.541195 (0.119481) | 1.723936 / 1.468490 
(0.255446) | 0.687815 / 4.584777 (-3.896962) | 3.331280 / 3.745712 (-0.414432) | 2.821430 / 5.269862 (-2.448432) | 1.542394 / 4.565676 (-3.023283) | 0.081665 / 0.424275 (-0.342610) | 0.012483 / 0.007607 (0.004875) | 0.524758 / 0.226044 (0.298713) | 5.277285 / 2.268929 (3.008357) | 2.278067 / 55.444624 (-53.166557) | 1.923232 / 6.876477 (-4.953245) | 1.978645 / 2.142072 (-0.163428) | 0.806225 / 4.805227 (-3.999002) | 0.147568 / 6.500664 (-6.353096) | 0.064206 / 0.075469 (-0.011263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.175079 / 1.841788 (-0.666708) | 13.677443 / 8.074308 (5.603135) | 14.064103 / 10.191392 (3.872711) | 0.167462 / 0.680424 (-0.512962) | 0.028677 / 0.534201 (-0.505524) | 0.399090 / 0.579283 (-0.180193) | 0.398930 / 0.434364 (-0.035433) | 0.461604 / 0.540337 (-0.078733) | 0.540978 / 1.386936 (-0.845958) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004507) | 0.004452 / 0.011008 (-0.006556) | 0.076169 / 0.038508 (0.037661) | 0.028290 / 0.023109 (0.005181) | 0.341105 / 0.275898 (0.065207) | 0.381465 / 0.323480 (0.057986) | 0.005038 / 0.007986 (-0.002948) | 0.003298 / 0.004328 (-0.001031) | 0.075794 / 0.004250 (0.071544) | 0.039225 / 0.037052 (0.002173) | 0.342995 / 0.258489 (0.084506) | 0.384878 / 0.293841 (0.091037) | 0.031766 / 0.128546 (-0.096780) | 0.011597 / 0.075646 (-0.064049) | 0.084849 / 0.419271 (-0.334423) | 0.041795 / 0.043533 (-0.001737) | 0.341770 / 0.255139 (0.086631) | 0.383142 / 0.283200 (0.099942) | 0.088854 / 0.141683 (-0.052829) | 1.465116 / 1.452155 (0.012961) | 1.566888 / 1.492716 (0.074171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225129 / 0.018006 (0.207123) | 0.394290 / 0.000490 (0.393801) | 0.000397 / 0.000200 (0.000197) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025492 / 0.037411 (-0.011919) | 0.100494 / 0.014526 (0.085968) | 0.110587 / 0.176557 (-0.065969) | 0.142715 / 0.737135 (-0.594420) | 0.110962 / 0.296338 (-0.185376) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437240 / 0.215209 (0.222031) | 4.379191 / 2.077655 (2.301536) | 2.055059 / 1.504120 (0.550939) | 1.844643 / 1.541195 (0.303448) | 1.914678 / 1.468490 (0.446188) | 0.695607 / 4.584777 (-3.889170) | 3.353845 / 3.745712 (-0.391867) | 1.837403 / 5.269862 (-3.432459) | 1.155518 / 4.565676 (-3.410158) | 0.082753 / 0.424275 (-0.341523) | 0.012812 / 0.007607 (0.005205) | 0.537304 / 0.226044 (0.311260) | 5.387425 / 2.268929 (3.118497) | 2.506986 / 55.444624 (-52.937638) | 2.159031 / 6.876477 (-4.717445) | 2.187844 / 2.142072 (0.045772) | 0.796880 / 4.805227 (-4.008347) | 0.151850 / 6.500664 (-6.348815) | 0.067577 / 0.075469 (-0.007892) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257779 / 1.841788 (-0.584009) | 13.968842 / 8.074308 (5.894534) | 13.544220 / 10.191392 (3.352828) | 0.149962 / 0.680424 (-0.530462) | 0.016875 / 0.534201 (-0.517326) | 0.394714 / 0.579283 (-0.184570) | 0.387845 / 0.434364 (-0.046519) | 0.481674 / 0.540337 (-0.058664) | 0.569820 / 1.386936 (-0.817116) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009745 / 0.011353 (-0.001607) | 0.005307 / 0.011008 (-0.005702) | 0.104230 / 0.038508 (0.065722) | 0.039745 / 0.023109 (0.016635) | 0.306102 / 0.275898 (0.030204) | 0.384390 / 0.323480 (0.060910) | 0.008265 / 0.007986 (0.000279) | 0.005516 / 0.004328 (0.001187) | 0.076023 / 0.004250 (0.071772) | 0.048266 / 0.037052 (0.011213) | 0.315380 / 0.258489 (0.056891) | 0.365735 / 0.293841 (0.071895) | 0.038222 / 0.128546 (-0.090324) | 0.012397 / 0.075646 (-0.063249) | 0.348964 / 0.419271 (-0.070307) | 0.047668 / 0.043533 (0.004135) | 0.301037 / 0.255139 (0.045898) | 0.322982 / 0.283200 (0.039783) | 0.109307 / 0.141683 (-0.032376) | 1.420777 / 1.452155 (-0.031378) | 1.468290 / 1.492716 (-0.024426) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.262386 / 0.018006 (0.244380) | 0.557151 / 0.000490 (0.556661) | 0.000352 / 0.000200 (0.000152) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029508 / 0.037411 (-0.007903) | 0.113960 / 0.014526 (0.099434) | 0.123176 / 0.176557 (-0.053381) | 0.161928 / 0.737135 (-0.575207) | 0.129196 / 0.296338 (-0.167142) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.407051 / 0.215209 (0.191842) | 4.072550 / 2.077655 (1.994895) | 1.899809 / 1.504120 (0.395689) | 1.751981 / 1.541195 (0.210786) | 1.841361 / 1.468490 
(0.372871) | 0.713908 / 4.584777 (-3.870869) | 3.703339 / 3.745712 (-0.042373) | 2.091283 / 5.269862 (-3.178578) | 1.323810 / 4.565676 (-3.241866) | 0.084691 / 0.424275 (-0.339584) | 0.012685 / 0.007607 (0.005078) | 0.511301 / 0.226044 (0.285257) | 5.109741 / 2.268929 (2.840813) | 2.315073 / 55.444624 (-53.129551) | 2.012746 / 6.876477 (-4.863731) | 2.160074 / 2.142072 (0.018002) | 0.853025 / 4.805227 (-3.952202) | 0.165301 / 6.500664 (-6.335363) | 0.062244 / 0.075469 (-0.013225) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219727 / 1.841788 (-0.622061) | 15.319675 / 8.074308 (7.245367) | 13.100883 / 10.191392 (2.909491) | 0.173451 / 0.680424 (-0.506973) | 0.029173 / 0.534201 (-0.505028) | 0.440162 / 0.579283 (-0.139122) | 0.429771 / 0.434364 (-0.004593) | 0.518689 / 0.540337 (-0.021648) | 0.608590 / 1.386936 (-0.778346) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007839 / 0.011353 (-0.003514) | 0.005409 / 0.011008 (-0.005599) | 0.076468 / 0.038508 (0.037960) | 0.036568 / 0.023109 (0.013459) | 0.337568 / 0.275898 (0.061670) | 0.379353 / 0.323480 (0.055873) | 0.006208 / 0.007986 (-0.001778) | 0.005971 / 0.004328 (0.001643) | 0.073765 / 0.004250 (0.069514) | 0.056609 / 0.037052 (0.019556) | 0.344578 / 0.258489 (0.086089) | 0.405249 / 0.293841 (0.111408) | 0.037652 / 0.128546 (-0.090894) | 0.012549 / 0.075646 (-0.063097) | 0.087086 / 0.419271 (-0.332186) | 0.056669 / 0.043533 (0.013136) | 0.334121 / 0.255139 (0.078983) | 0.354582 / 0.283200 (0.071383) | 0.113293 / 0.141683 (-0.028390) | 1.437327 / 1.452155 (-0.014828) | 1.574400 / 1.492716 (0.081684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.325235 / 0.018006 (0.307229) | 0.535405 / 0.000490 (0.534915) | 0.014119 / 0.000200 (0.013919) | 0.000278 / 0.000054 (0.000224) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030826 / 0.037411 (-0.006585) | 0.114077 / 0.014526 (0.099552) | 0.128799 / 0.176557 (-0.047758) | 0.172164 / 0.737135 (-0.564971) | 0.133665 / 0.296338 (-0.162673) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430898 / 0.215209 (0.215689) | 4.285507 / 2.077655 (2.207853) | 2.089767 / 1.504120 (0.585647) | 1.899457 / 1.541195 (0.358262) | 2.042875 / 1.468490 (0.574385) | 0.690575 / 4.584777 (-3.894202) | 3.815905 / 3.745712 (0.070192) | 3.371085 / 5.269862 (-1.898776) | 1.865748 / 4.565676 (-2.699929) | 0.086678 / 0.424275 (-0.337597) | 0.013172 / 0.007607 (0.005565) | 0.552038 / 0.226044 (0.325994) | 5.275093 / 2.268929 (3.006165) | 2.561102 / 55.444624 (-52.883522) | 2.224235 / 6.876477 (-4.652242) | 2.330315 / 2.142072 (0.188243) | 0.845163 / 4.805227 (-3.960064) | 0.170675 / 6.500664 (-6.329989) | 0.068446 / 0.075469 (-0.007023) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.261213 / 1.841788 (-0.580575) | 15.354959 / 8.074308 (7.280651) | 15.034302 / 10.191392 (4.842910) | 0.146704 / 0.680424 (-0.533720) | 0.017986 / 0.534201 (-0.516215) | 0.425978 / 0.579283 (-0.153305) | 0.421806 / 0.434364 (-0.012558) | 0.494844 / 0.540337 (-0.045493) | 0.587870 / 1.386936 (-0.799066) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012765 / 0.011353 (0.001412) | 0.006429 / 0.011008 (-0.004579) | 0.133669 / 0.038508 (0.095161) | 0.041420 / 0.023109 (0.018311) | 0.419990 / 0.275898 (0.144092) | 0.505218 / 0.323480 (0.181738) | 0.010189 / 0.007986 (0.002204) | 0.005134 / 0.004328 (0.000805) | 0.100890 / 0.004250 (0.096640) | 0.045639 / 0.037052 (0.008587) | 0.440593 / 0.258489 (0.182103) | 0.476966 / 0.293841 (0.183125) | 0.059270 / 0.128546 (-0.069276) | 0.018625 / 0.075646 (-0.057021) | 0.444957 / 0.419271 (0.025686) | 0.060669 / 0.043533 (0.017136) | 0.415373 / 0.255139 (0.160234) | 0.461810 / 0.283200 (0.178610) | 0.116119 / 0.141683 (-0.025564) | 1.873691 / 1.452155 (0.421536) | 1.939891 / 1.492716 (0.447175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259529 / 0.018006 (0.241523) | 0.587213 / 0.000490 (0.586723) | 0.003729 / 0.000200 (0.003529) | 0.000115 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032064 / 0.037411 (-0.005347) | 0.140228 / 0.014526 (0.125702) | 0.147139 / 0.176557 (-0.029417) | 0.193731 / 0.737135 (-0.543405) | 0.162126 / 0.296338 (-0.134213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.639262 / 0.215209 (0.424053) | 6.496491 / 2.077655 (4.418836) | 2.602044 / 1.504120 (1.097924) | 2.245891 / 1.541195 (0.704696) | 2.301321 / 1.468490 
(0.832831) | 1.234088 / 4.584777 (-3.350689) | 5.883315 / 3.745712 (2.137603) | 3.166902 / 5.269862 (-2.102959) | 2.258279 / 4.565676 (-2.307398) | 0.146203 / 0.424275 (-0.278072) | 0.015490 / 0.007607 (0.007883) | 0.800188 / 0.226044 (0.574144) | 8.150866 / 2.268929 (5.881938) | 3.419508 / 55.444624 (-52.025117) | 2.712174 / 6.876477 (-4.164302) | 2.805059 / 2.142072 (0.662987) | 1.421047 / 4.805227 (-3.384180) | 0.254274 / 6.500664 (-6.246390) | 0.083886 / 0.075469 (0.008417) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.651962 / 1.841788 (-0.189826) | 19.453202 / 8.074308 (11.378894) | 24.643881 / 10.191392 (14.452489) | 0.263612 / 0.680424 (-0.416812) | 0.046913 / 0.534201 (-0.487288) | 0.579861 / 0.579283 (0.000578) | 0.695137 / 0.434364 (0.260773) | 0.705479 / 0.540337 (0.165142) | 0.806073 / 1.386936 (-0.580863) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010384 / 0.011353 (-0.000969) | 0.007460 / 0.011008 (-0.003548) | 0.107830 / 0.038508 (0.069322) | 0.036792 / 0.023109 (0.013682) | 0.469585 / 0.275898 (0.193687) | 0.521278 / 0.323480 (0.197798) | 0.007472 / 0.007986 (-0.000513) | 0.007774 / 0.004328 (0.003446) | 0.105405 / 0.004250 (0.101154) | 0.053732 / 0.037052 (0.016680) | 0.486299 / 0.258489 (0.227810) | 0.537067 / 0.293841 (0.243226) | 0.053378 / 0.128546 (-0.075168) | 0.022018 / 0.075646 (-0.053628) | 0.127765 / 0.419271 (-0.291507) | 0.063844 / 0.043533 (0.020311) | 0.479724 / 0.255139 (0.224585) | 0.511243 / 0.283200 (0.228043) | 0.123223 / 0.141683 (-0.018460) | 1.934167 / 1.452155 (0.482013) | 2.003168 / 1.492716 (0.510451) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227670 / 0.018006 (0.209664) | 0.609125 / 0.000490 (0.608635) | 0.004408 / 0.000200 (0.004208) | 0.000147 / 0.000054 (0.000092) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035905 / 0.037411 (-0.001506) | 0.142207 / 0.014526 (0.127681) | 0.154749 / 0.176557 (-0.021808) | 0.216191 / 0.737135 (-0.520944) | 0.156577 / 0.296338 (-0.139761) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.665085 / 0.215209 (0.449876) | 6.510923 / 2.077655 (4.433269) | 2.902438 / 1.504120 (1.398318) | 2.561427 / 1.541195 (1.020232) | 2.669556 / 1.468490 (1.201066) | 1.190340 / 4.584777 (-3.394437) | 5.933066 / 3.745712 (2.187354) | 5.627784 / 5.269862 (0.357922) | 2.971922 / 4.565676 (-1.593755) | 0.140884 / 0.424275 (-0.283391) | 0.015382 / 0.007607 (0.007775) | 0.810441 / 0.226044 (0.584396) | 8.255538 / 2.268929 (5.986609) | 3.819014 / 55.444624 (-51.625611) | 3.222479 / 6.876477 (-3.653998) | 3.181700 / 2.142072 (1.039627) | 1.483403 / 4.805227 (-3.321824) | 0.262726 / 6.500664 (-6.237939) | 0.090252 / 0.075469 (0.014783) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.748566 / 1.841788 (-0.093222) | 19.566894 / 8.074308 (11.492586) | 24.382155 / 10.191392 (14.190763) | 0.260118 / 0.680424 (-0.420305) | 0.028725 / 0.534201 (-0.505476) | 0.564875 / 0.579283 (-0.014408) | 0.666708 / 0.434364 (0.232344) | 0.691165 / 0.540337 (0.150827) | 0.837061 / 1.386936 (-0.549875) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010098 / 0.011353 (-0.001255) | 0.005797 / 0.011008 (-0.005211) | 0.111262 / 0.038508 (0.072754) | 0.039687 / 0.023109 (0.016578) | 0.331081 / 0.275898 (0.055183) | 0.395878 / 0.323480 (0.072398) | 0.009244 / 0.007986 (0.001259) | 0.004498 / 0.004328 (0.000170) | 0.086129 / 0.004250 (0.081879) | 0.046662 / 0.037052 (0.009610) | 0.361926 / 0.258489 (0.103437) | 0.386155 / 0.293841 (0.092314) | 0.043657 / 0.128546 (-0.084889) | 0.013545 / 0.075646 (-0.062101) | 0.383735 / 0.419271 (-0.035537) | 0.055727 / 0.043533 (0.012194) | 0.355356 / 0.255139 (0.100217) | 0.358749 / 0.283200 (0.075550) | 0.123219 / 0.141683 (-0.018463) | 1.707982 / 1.452155 (0.255828) | 1.773342 / 1.492716 (0.280626) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238902 / 0.018006 (0.220896) | 0.495525 / 0.000490 (0.495036) | 0.001742 / 0.000200 (0.001542) | 0.000096 / 0.000054 (0.000041) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031276 / 0.037411 (-0.006135) | 0.124286 / 0.014526 (0.109760) | 0.136236 / 0.176557 (-0.040321) | 0.180257 / 0.737135 (-0.556879) | 0.141047 / 0.296338 (-0.155292) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.465075 / 0.215209 (0.249865) | 4.543997 / 2.077655 (2.466342) | 2.036632 / 1.504120 (0.532512) | 1.820356 / 1.541195 (0.279161) | 1.860692 / 1.468490 
(0.392202) | 0.807549 / 4.584777 (-3.777227) | 4.400369 / 3.745712 (0.654657) | 2.423372 / 5.269862 (-2.846490) | 1.741338 / 4.565676 (-2.824339) | 0.099457 / 0.424275 (-0.324818) | 0.014464 / 0.007607 (0.006857) | 0.599442 / 0.226044 (0.373398) | 5.867798 / 2.268929 (3.598870) | 2.641859 / 55.444624 (-52.802766) | 2.294246 / 6.876477 (-4.582231) | 2.329639 / 2.142072 (0.187567) | 0.981897 / 4.805227 (-3.823331) | 0.189278 / 6.500664 (-6.311386) | 0.071868 / 0.075469 (-0.003601) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.471800 / 1.841788 (-0.369988) | 17.149150 / 8.074308 (9.074841) | 15.818942 / 10.191392 (5.627550) | 0.174760 / 0.680424 (-0.505664) | 0.033507 / 0.534201 (-0.500694) | 0.511055 / 0.579283 (-0.068228) | 0.517107 / 0.434364 (0.082743) | 0.650813 / 0.540337 (0.110476) | 0.752515 / 1.386936 (-0.634421) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008651 / 0.011353 (-0.002702) | 0.005935 / 0.011008 (-0.005073) | 0.088589 / 0.038508 (0.050081) | 0.038796 / 0.023109 (0.015687) | 0.415430 / 0.275898 (0.139532) | 0.443693 / 0.323480 (0.120213) | 0.006631 / 0.007986 (-0.001354) | 0.004638 / 0.004328 (0.000309) | 0.085779 / 0.004250 (0.081529) | 0.053994 / 0.037052 (0.016942) | 0.408349 / 0.258489 (0.149860) | 0.475441 / 0.293841 (0.181600) | 0.042792 / 0.128546 (-0.085754) | 0.013938 / 0.075646 (-0.061709) | 0.102173 / 0.419271 (-0.317098) | 0.057940 / 0.043533 (0.014407) | 0.408967 / 0.255139 (0.153828) | 0.422741 / 0.283200 (0.139541) | 0.121844 / 0.141683 (-0.019839) | 1.772779 / 1.452155 (0.320625) | 1.837706 / 1.492716 (0.344989) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228896 / 0.018006 (0.210890) | 0.497964 / 0.000490 (0.497475) | 0.004402 / 0.000200 (0.004202) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035626 / 0.037411 (-0.001786) | 0.132021 / 0.014526 (0.117495) | 0.145599 / 0.176557 (-0.030957) | 0.192317 / 0.737135 (-0.544818) | 0.150165 / 0.296338 (-0.146174) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.500216 / 0.215209 (0.285007) | 5.002916 / 2.077655 (2.925262) | 2.502439 / 1.504120 (0.998319) | 2.353019 / 1.541195 (0.811825) | 2.485082 / 1.468490 (1.016592) | 0.827694 / 4.584777 (-3.757083) | 4.569319 / 3.745712 (0.823607) | 3.739820 / 5.269862 (-1.530042) | 2.097857 / 4.565676 (-2.467819) | 0.098636 / 0.424275 (-0.325639) | 0.014608 / 0.007607 (0.007001) | 0.604411 / 0.226044 (0.378366) | 6.131702 / 2.268929 (3.862774) | 3.043988 / 55.444624 (-52.400637) | 2.642427 / 6.876477 (-4.234050) | 2.687223 / 2.142072 (0.545151) | 0.968808 / 4.805227 (-3.836419) | 0.193876 / 6.500664 (-6.306788) | 0.076931 / 0.075469 (0.001462) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.511820 / 1.841788 (-0.329968) | 17.971574 / 8.074308 (9.897265) | 16.512738 / 10.191392 (6.321346) | 0.223702 / 0.680424 (-0.456722) | 0.020191 / 0.534201 (-0.514010) | 0.511045 / 0.579283 (-0.068238) | 0.499813 / 0.434364 (0.065449) | 0.642147 / 0.540337 (0.101810) | 0.756029 / 1.386936 (-0.630907) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008909 / 0.011353 (-0.002444) | 0.005096 / 0.011008 (-0.005912) | 0.098568 / 0.038508 (0.060060) | 0.034548 / 0.023109 (0.011438) | 0.294762 / 0.275898 (0.018864) | 0.366093 / 0.323480 (0.042613) | 0.007476 / 0.007986 (-0.000510) | 0.003982 / 0.004328 (-0.000347) | 0.075975 / 0.004250 (0.071725) | 0.040499 / 0.037052 (0.003446) | 0.315050 / 0.258489 (0.056561) | 0.351273 / 0.293841 (0.057433) | 0.038327 / 0.128546 (-0.090219) | 0.011943 / 0.075646 (-0.063703) | 0.332148 / 0.419271 (-0.087124) | 0.047648 / 0.043533 (0.004115) | 0.295817 / 0.255139 (0.040678) | 0.322704 / 0.283200 (0.039504) | 0.100830 / 0.141683 (-0.040853) | 1.422162 / 1.452155 (-0.029993) | 1.468972 / 1.492716 (-0.023744) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201164 / 0.018006 (0.183158) | 0.435425 / 0.000490 (0.434935) | 0.001576 / 0.000200 (0.001376) | 0.000218 / 0.000054 (0.000163) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026667 / 0.037411 (-0.010744) | 0.106161 / 0.014526 (0.091636) | 0.115836 / 0.176557 (-0.060720) | 0.151511 / 0.737135 (-0.585624) | 0.122248 / 0.296338 (-0.174091) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395974 / 0.215209 (0.180765) | 3.952958 / 2.077655 (1.875303) | 1.772111 / 1.504120 (0.267991) | 1.581370 / 1.541195 (0.040175) | 1.602811 / 1.468490 
(0.134321) | 0.694072 / 4.584777 (-3.890705) | 3.640238 / 3.745712 (-0.105474) | 2.028865 / 5.269862 (-3.240997) | 1.419182 / 4.565676 (-3.146495) | 0.084078 / 0.424275 (-0.340197) | 0.012248 / 0.007607 (0.004641) | 0.499768 / 0.226044 (0.273723) | 4.997449 / 2.268929 (2.728521) | 2.280711 / 55.444624 (-53.163913) | 1.971701 / 6.876477 (-4.904776) | 1.983248 / 2.142072 (-0.158824) | 0.831030 / 4.805227 (-3.974198) | 0.163008 / 6.500664 (-6.337656) | 0.061887 / 0.075469 (-0.013582) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.191744 / 1.841788 (-0.650043) | 14.424546 / 8.074308 (6.350238) | 14.530127 / 10.191392 (4.338735) | 0.165793 / 0.680424 (-0.514631) | 0.029099 / 0.534201 (-0.505102) | 0.447830 / 0.579283 (-0.131453) | 0.441036 / 0.434364 (0.006672) | 0.554697 / 0.540337 (0.014360) | 0.668854 / 1.386936 (-0.718082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006825 / 0.011353 (-0.004528) | 0.004998 / 0.011008 (-0.006010) | 0.074197 / 0.038508 (0.035689) | 0.032381 / 0.023109 (0.009272) | 0.335745 / 0.275898 (0.059847) | 0.360474 / 0.323480 (0.036994) | 0.005420 / 0.007986 (-0.002566) | 0.005121 / 0.004328 (0.000792) | 0.074980 / 0.004250 (0.070730) | 0.046392 / 0.037052 (0.009340) | 0.338693 / 0.258489 (0.080204) | 0.383679 / 0.293841 (0.089838) | 0.035380 / 0.128546 (-0.093166) | 0.012197 / 0.075646 (-0.063449) | 0.085738 / 0.419271 (-0.333533) | 0.049990 / 0.043533 (0.006458) | 0.342640 / 0.255139 (0.087501) | 0.355139 / 0.283200 (0.071939) | 0.102992 / 0.141683 (-0.038690) | 1.451900 / 1.452155 (-0.000254) | 1.550919 / 1.492716 (0.058202) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223241 / 0.018006 (0.205235) | 0.436954 / 0.000490 (0.436464) | 0.003319 / 0.000200 (0.003120) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028042 / 0.037411 (-0.009370) | 0.106079 / 0.014526 (0.091554) | 0.122713 / 0.176557 (-0.053843) | 0.156543 / 0.737135 (-0.580593) | 0.122424 / 0.296338 (-0.173914) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439482 / 0.215209 (0.224273) | 4.283112 / 2.077655 (2.205457) | 2.139705 / 1.504120 (0.635585) | 1.940898 / 1.541195 (0.399703) | 2.003906 / 1.468490 (0.535416) | 0.703269 / 4.584777 (-3.881508) | 3.780391 / 3.745712 (0.034679) | 2.079963 / 5.269862 (-3.189898) | 1.330669 / 4.565676 (-3.235007) | 0.086582 / 0.424275 (-0.337693) | 0.012497 / 0.007607 (0.004890) | 0.519329 / 0.226044 (0.293284) | 5.218117 / 2.268929 (2.949189) | 2.635982 / 55.444624 (-52.808643) | 2.301111 / 6.876477 (-4.575366) | 2.341312 / 2.142072 (0.199239) | 0.840157 / 4.805227 (-3.965070) | 0.166174 / 6.500664 (-6.334490) | 0.062890 / 0.075469 (-0.012579) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.257672 / 1.841788 (-0.584116) | 14.983374 / 8.074308 (6.909066) | 14.284441 / 10.191392 (4.093049) | 0.176077 / 0.680424 (-0.504347) | 0.017544 / 0.534201 (-0.516657) | 0.429619 / 0.579283 (-0.149664) | 0.426371 / 0.434364 (-0.007993) | 0.534832 / 0.540337 (-0.005506) | 0.643322 / 1.386936 (-0.743614) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010622 / 0.011353 (-0.000731) | 0.005856 / 0.011008 (-0.005152) | 0.108608 / 0.038508 (0.070100) | 0.039868 / 0.023109 (0.016759) | 0.327853 / 0.275898 (0.051955) | 0.396721 / 0.323480 (0.073241) | 0.008916 / 0.007986 (0.000930) | 0.004590 / 0.004328 (0.000261) | 0.085020 / 0.004250 (0.080770) | 0.046608 / 0.037052 (0.009555) | 0.356369 / 0.258489 (0.097880) | 0.391142 / 0.293841 (0.097301) | 0.040579 / 0.128546 (-0.087967) | 0.012249 / 0.075646 (-0.063397) | 0.387740 / 0.419271 (-0.031532) | 0.057794 / 0.043533 (0.014262) | 0.335763 / 0.255139 (0.080624) | 0.369847 / 0.283200 (0.086647) | 0.121276 / 0.141683 (-0.020407) | 1.605406 / 1.452155 (0.153251) | 1.709524 / 1.492716 (0.216808) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226688 / 0.018006 (0.208681) | 0.493320 / 0.000490 (0.492831) | 0.002825 / 0.000200 (0.002626) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031874 / 0.037411 (-0.005538) | 0.117365 / 0.014526 (0.102840) | 0.127697 / 0.176557 (-0.048859) | 0.175589 / 0.737135 (-0.561546) | 0.137731 / 0.296338 (-0.158608) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.472563 / 0.215209 (0.257354) | 4.744383 / 2.077655 (2.666728) | 2.152015 / 1.504120 (0.647895) | 1.925398 / 1.541195 (0.384203) | 2.054613 / 1.468490 
(0.586123) | 0.821703 / 4.584777 (-3.763074) | 4.468177 / 3.745712 (0.722465) | 4.687682 / 5.269862 (-0.582179) | 2.379674 / 4.565676 (-2.186003) | 0.101325 / 0.424275 (-0.322950) | 0.014891 / 0.007607 (0.007284) | 0.593161 / 0.226044 (0.367117) | 5.641670 / 2.268929 (3.372741) | 2.460206 / 55.444624 (-52.984419) | 2.131148 / 6.876477 (-4.745329) | 2.351067 / 2.142072 (0.208994) | 0.997634 / 4.805227 (-3.807593) | 0.195338 / 6.500664 (-6.305326) | 0.075540 / 0.075469 (0.000071) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.411585 / 1.841788 (-0.430203) | 17.055689 / 8.074308 (8.981381) | 16.544028 / 10.191392 (6.352636) | 0.180840 / 0.680424 (-0.499584) | 0.034549 / 0.534201 (-0.499652) | 0.510256 / 0.579283 (-0.069027) | 0.525632 / 0.434364 (0.091268) | 0.601206 / 0.540337 (0.060868) | 0.668468 / 1.386936 (-0.718469) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008989 / 0.011353 (-0.002364) | 0.006065 / 0.011008 (-0.004943) | 0.088294 / 0.038508 (0.049786) | 0.040404 / 0.023109 (0.017295) | 0.405622 / 0.275898 (0.129724) | 0.454519 / 0.323480 (0.131039) | 0.006919 / 0.007986 (-0.001067) | 0.004545 / 0.004328 (0.000217) | 0.087023 / 0.004250 (0.082772) | 0.055962 / 0.037052 (0.018910) | 0.400942 / 0.258489 (0.142453) | 0.490670 / 0.293841 (0.196829) | 0.044086 / 0.128546 (-0.084461) | 0.014485 / 0.075646 (-0.061162) | 0.103333 / 0.419271 (-0.315938) | 0.059663 / 0.043533 (0.016130) | 0.404944 / 0.255139 (0.149805) | 0.425763 / 0.283200 (0.142563) | 0.123989 / 0.141683 (-0.017694) | 1.777244 / 1.452155 (0.325089) | 1.879884 / 1.492716 (0.387167) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226440 / 0.018006 (0.208434) | 0.492688 / 0.000490 (0.492198) | 0.004691 / 0.000200 (0.004491) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035123 / 0.037411 (-0.002288) | 0.134288 / 0.014526 (0.119762) | 0.145542 / 0.176557 (-0.031015) | 0.195372 / 0.737135 (-0.541764) | 0.152551 / 0.296338 (-0.143787) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.468615 / 0.215209 (0.253406) | 4.813363 / 2.077655 (2.735708) | 2.333606 / 1.504120 (0.829486) | 2.107344 / 1.541195 (0.566149) | 2.109109 / 1.468490 (0.640619) | 0.783779 / 4.584777 (-3.800998) | 4.521448 / 3.745712 (0.775736) | 2.290532 / 5.269862 (-2.979329) | 1.553488 / 4.565676 (-3.012189) | 0.088786 / 0.424275 (-0.335489) | 0.013091 / 0.007607 (0.005484) | 0.567165 / 0.226044 (0.341120) | 5.974315 / 2.268929 (3.705386) | 2.815018 / 55.444624 (-52.629606) | 2.488954 / 6.876477 (-4.387522) | 2.461849 / 2.142072 (0.319776) | 0.934487 / 4.805227 (-3.870740) | 0.190209 / 6.500664 (-6.310455) | 0.074811 / 0.075469 (-0.000658) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.513476 / 1.841788 (-0.328311) | 17.902599 / 8.074308 (9.828291) | 14.308027 / 10.191392 (4.116635) | 0.201992 / 0.680424 (-0.478432) | 0.018678 / 0.534201 (-0.515523) | 0.454707 / 0.579283 (-0.124576) | 0.470643 / 0.434364 (0.036279) | 0.612534 / 0.540337 (0.072197) | 0.685773 / 1.386936 (-0.701163) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009385 / 0.011353 (-0.001968) | 0.005220 / 0.011008 (-0.005788) | 0.098722 / 0.038508 (0.060214) | 0.035382 / 0.023109 (0.012273) | 0.297114 / 0.275898 (0.021216) | 0.371443 / 0.323480 (0.047963) | 0.008070 / 0.007986 (0.000084) | 0.004204 / 0.004328 (-0.000125) | 0.075621 / 0.004250 (0.071370) | 0.046015 / 0.037052 (0.008963) | 0.304569 / 0.258489 (0.046080) | 0.345598 / 0.293841 (0.051757) | 0.037946 / 0.128546 (-0.090600) | 0.011972 / 0.075646 (-0.063674) | 0.331993 / 0.419271 (-0.087279) | 0.047250 / 0.043533 (0.003717) | 0.296588 / 0.255139 (0.041449) | 0.316070 / 0.283200 (0.032870) | 0.108211 / 0.141683 (-0.033472) | 1.447619 / 1.452155 (-0.004535) | 1.481243 / 1.492716 (-0.011473) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274860 / 0.018006 (0.256854) | 0.503139 / 0.000490 (0.502649) | 0.003598 / 0.000200 (0.003398) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026752 / 0.037411 (-0.010660) | 0.109008 / 0.014526 (0.094482) | 0.119109 / 0.176557 (-0.057448) | 0.158462 / 0.737135 (-0.578673) | 0.126171 / 0.296338 (-0.170168) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396396 / 0.215209 (0.181187) | 3.963055 / 2.077655 (1.885400) | 1.796308 / 1.504120 (0.292188) | 1.600565 / 1.541195 (0.059370) | 1.742409 / 1.468490 
(0.273919) | 0.690942 / 4.584777 (-3.893835) | 3.713343 / 3.745712 (-0.032369) | 2.066804 / 5.269862 (-3.203058) | 1.292946 / 4.565676 (-3.272730) | 0.084344 / 0.424275 (-0.339931) | 0.012473 / 0.007607 (0.004865) | 0.513109 / 0.226044 (0.287065) | 5.175141 / 2.268929 (2.906213) | 2.266559 / 55.444624 (-53.178066) | 1.935737 / 6.876477 (-4.940740) | 2.028911 / 2.142072 (-0.113161) | 0.831191 / 4.805227 (-3.974036) | 0.163155 / 6.500664 (-6.337509) | 0.063414 / 0.075469 (-0.012055) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.195429 / 1.841788 (-0.646358) | 15.257933 / 8.074308 (7.183625) | 14.358815 / 10.191392 (4.167423) | 0.152677 / 0.680424 (-0.527747) | 0.028890 / 0.534201 (-0.505311) | 0.455342 / 0.579283 (-0.123941) | 0.442602 / 0.434364 (0.008238) | 0.526833 / 0.540337 (-0.013505) | 0.618296 / 1.386936 (-0.768640) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007613 / 0.011353 (-0.003740) | 0.005515 / 0.011008 (-0.005493) | 0.073759 / 0.038508 (0.035251) | 0.033944 / 0.023109 (0.010835) | 0.347764 / 0.275898 (0.071866) | 0.371143 / 0.323480 (0.047664) | 0.005997 / 0.007986 (-0.001988) | 0.004322 / 0.004328 (-0.000006) | 0.073002 / 0.004250 (0.068751) | 0.053051 / 0.037052 (0.015999) | 0.340345 / 0.258489 (0.081856) | 0.383761 / 0.293841 (0.089920) | 0.037734 / 0.128546 (-0.090813) | 0.012815 / 0.075646 (-0.062831) | 0.086998 / 0.419271 (-0.332273) | 0.050165 / 0.043533 (0.006632) | 0.343864 / 0.255139 (0.088725) | 0.356734 / 0.283200 (0.073534) | 0.108955 / 0.141683 (-0.032728) | 1.464558 / 1.452155 (0.012403) | 1.560084 / 1.492716 (0.067368) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.327885 / 0.018006 (0.309878) | 0.515515 / 0.000490 (0.515025) | 0.000439 / 0.000200 (0.000239) | 0.000059 / 0.000054 (0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030741 / 0.037411 (-0.006670) | 0.107634 / 0.014526 (0.093108) | 0.127121 / 0.176557 (-0.049436) | 0.164044 / 0.737135 (-0.573092) | 0.129097 / 0.296338 (-0.167242) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435690 / 0.215209 (0.220481) | 4.350705 / 2.077655 (2.273050) | 2.199597 / 1.504120 (0.695477) | 2.022715 / 1.541195 (0.481521) | 2.265907 / 1.468490 (0.797417) | 0.695817 / 4.584777 (-3.888960) | 3.795207 / 3.745712 (0.049494) | 3.061587 / 5.269862 (-2.208274) | 1.872213 / 4.565676 (-2.693463) | 0.085265 / 0.424275 (-0.339010) | 0.012243 / 0.007607 (0.004636) | 0.547209 / 0.226044 (0.321164) | 5.383626 / 2.268929 (3.114698) | 2.707439 / 55.444624 (-52.737185) | 2.393773 / 6.876477 (-4.482703) | 2.481385 / 2.142072 (0.339312) | 0.826169 / 4.805227 (-3.979059) | 0.166643 / 6.500664 (-6.334021) | 0.065817 / 0.075469 (-0.009652) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274469 / 1.841788 (-0.567318) | 15.565025 / 8.074308 (7.490717) | 14.254192 / 10.191392 (4.062800) | 0.166785 / 0.680424 (-0.513639) | 0.017830 / 0.534201 (-0.516371) | 0.430406 / 0.579283 (-0.148877) | 0.435655 / 0.434364 (0.001292) | 0.530605 / 0.540337 (-0.009732) | 0.636355 / 1.386936 (-0.750581) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008466 / 0.011353 (-0.002887) | 0.004679 / 0.011008 (-0.006329) | 0.100534 / 0.038508 (0.062025) | 0.029513 / 0.023109 (0.006403) | 0.302866 / 0.275898 (0.026968) | 0.352816 / 0.323480 (0.029336) | 0.006912 / 0.007986 (-0.001074) | 0.003513 / 0.004328 (-0.000815) | 0.078625 / 0.004250 (0.074375) | 0.036725 / 0.037052 (-0.000327) | 0.312135 / 0.258489 (0.053646) | 0.344579 / 0.293841 (0.050738) | 0.033870 / 0.128546 (-0.094677) | 0.011563 / 0.075646 (-0.064083) | 0.318982 / 0.419271 (-0.100290) | 0.043002 / 0.043533 (-0.000531) | 0.301956 / 0.255139 (0.046817) | 0.330798 / 0.283200 (0.047599) | 0.091755 / 0.141683 (-0.049927) | 1.458577 / 1.452155 (0.006422) | 1.532642 / 1.492716 (0.039926) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194853 / 0.018006 (0.176847) | 0.396844 / 0.000490 (0.396354) | 0.004401 / 0.000200 (0.004201) | 0.000076 / 0.000054 (0.000022) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022971 / 0.037411 (-0.014441) | 0.096595 / 0.014526 (0.082069) | 0.106104 / 0.176557 (-0.070452) | 0.144815 / 0.737135 (-0.592320) | 0.110036 / 0.296338 (-0.186303) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415025 / 0.215209 (0.199816) | 4.138136 / 2.077655 (2.060481) | 1.861253 / 1.504120 (0.357133) | 1.653420 / 1.541195 (0.112226) | 1.703784 / 1.468490 
(0.235294) | 0.698261 / 4.584777 (-3.886516) | 3.357240 / 3.745712 (-0.388472) | 3.025790 / 5.269862 (-2.244072) | 1.637191 / 4.565676 (-2.928485) | 0.085620 / 0.424275 (-0.338655) | 0.012454 / 0.007607 (0.004846) | 0.524708 / 0.226044 (0.298663) | 5.269234 / 2.268929 (3.000306) | 2.290612 / 55.444624 (-53.154012) | 1.936107 / 6.876477 (-4.940370) | 1.968216 / 2.142072 (-0.173856) | 0.810438 / 4.805227 (-3.994789) | 0.154133 / 6.500664 (-6.346531) | 0.064978 / 0.075469 (-0.010491) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.231782 / 1.841788 (-0.610006) | 13.545573 / 8.074308 (5.471264) | 14.558765 / 10.191392 (4.367373) | 0.140763 / 0.680424 (-0.539661) | 0.029259 / 0.534201 (-0.504942) | 0.407776 / 0.579283 (-0.171507) | 0.410244 / 0.434364 (-0.024120) | 0.477313 / 0.540337 (-0.063024) | 0.551465 / 1.386936 (-0.835471) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006272 / 0.011353 (-0.005081) | 0.004397 / 0.011008 (-0.006611) | 0.077496 / 0.038508 (0.038988) | 0.026946 / 0.023109 (0.003837) | 0.342992 / 0.275898 (0.067094) | 0.374407 / 0.323480 (0.050927) | 0.004849 / 0.007986 (-0.003136) | 0.004549 / 0.004328 (0.000220) | 0.076439 / 0.004250 (0.072189) | 0.035829 / 0.037052 (-0.001224) | 0.343483 / 0.258489 (0.084994) | 0.385581 / 0.293841 (0.091740) | 0.031745 / 0.128546 (-0.096801) | 0.011617 / 0.075646 (-0.064030) | 0.087207 / 0.419271 (-0.332064) | 0.042252 / 0.043533 (-0.001281) | 0.343223 / 0.255139 (0.088084) | 0.368707 / 0.283200 (0.085508) | 0.093259 / 0.141683 (-0.048424) | 1.506904 / 1.452155 (0.054750) | 1.567583 / 1.492716 (0.074867) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.158962 / 0.018006 (0.140955) | 0.395982 / 0.000490 (0.395492) | 0.003604 / 0.000200 (0.003404) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025003 / 0.037411 (-0.012408) | 0.101176 / 0.014526 (0.086650) | 0.104494 / 0.176557 (-0.072062) | 0.140414 / 0.737135 (-0.596722) | 0.108398 / 0.296338 (-0.187941) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436849 / 0.215209 (0.221640) | 4.369428 / 2.077655 (2.291774) | 2.070613 / 1.504120 (0.566493) | 1.867511 / 1.541195 (0.326317) | 1.866589 / 1.468490 (0.398099) | 0.700036 / 4.584777 (-3.884741) | 3.407513 / 3.745712 (-0.338199) | 3.022409 / 5.269862 (-2.247453) | 1.581423 / 4.565676 (-2.984253) | 0.083425 / 0.424275 (-0.340850) | 0.012380 / 0.007607 (0.004773) | 0.535087 / 0.226044 (0.309043) | 5.374814 / 2.268929 (3.105886) | 2.504841 / 55.444624 (-52.939784) | 2.166484 / 6.876477 (-4.709993) | 2.166363 / 2.142072 (0.024291) | 0.803692 / 4.805227 (-4.001535) | 0.150873 / 6.500664 (-6.349791) | 0.066253 / 0.075469 (-0.009216) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291256 / 1.841788 (-0.550532) | 13.827843 / 8.074308 (5.753535) | 13.839334 / 10.191392 (3.647942) | 0.153530 / 0.680424 (-0.526894) | 0.016896 / 0.534201 (-0.517305) | 0.379937 / 0.579283 (-0.199346) | 0.396241 / 0.434364 (-0.038123) | 0.461808 / 0.540337 (-0.078530) | 0.553023 / 1.386936 (-0.833913) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3290/comments | https://api.github.com/repos/huggingface/datasets/issues/3290/events | https://github.com/huggingface/datasets/pull/3290 | 1,056,414,856 | PR_kwDODunzps4uqzcv | 3,290 | Make several audio datasets streamable | [] | closed | false | null | 4 | 2021-11-17T17:43:41Z | 2022-02-01T21:00:52Z | 2021-11-19T15:08:57Z | null | <s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s>
Make those audio datasets streamable:
- [x] common_voice
- [x] openslr
- [x] vivos
- [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok*
- [ ] <s>multilingual_librispeech (yet to be converted)</s> *TODO in a separate PR* | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3290/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3290.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3290",
"merged_at": "2021-11-19T15:08:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3290.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3290"
} | true | [
"Reading FLAC (for `librispeech_asr`) works OK for me (`soundfile` version: `0.10.3`):\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/librispeech_asr/librispeech_asr.py\", \"clean\", streaming=True, split=\"train.100\")\r\n\r\nIn [3]: item = next(iter(ds))\r\n\r\nIn [4]: item.keys()\r\nOut[4]: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id'])\r\n\r\nIn [5]: item[\"file\"]\r\nOut[5]: '374-180298-0000.flac'\r\n\r\nIn [6]: item[\"audio\"].keys()\r\nOut[6]: dict_keys(['path', 'array', 'sampling_rate'])\r\n\r\nIn [7]: item[\"audio\"][\"sampling_rate\"]\r\nOut[7]: 16000\r\n\r\nIn [8]: item[\"audio\"][\"path\"]\r\nOut[8]: '374-180298-0000.flac'\r\n\r\nIn [9]: item[\"audio\"][\"array\"].shape\r\nOut[9]: (232480,)\r\n```",
"Oh cool ! I think this might have come from an issue with my local `soundfile` installation then",
"I'll do `multilingual_librispeech` in a separate PR since it requires the data to be in another format (in particular separate the train/dev/test splits in different files)",
"@lhoestq @albertvillanova - think it would have been nice to have added a big message at the top stating that this is a breaking change and ping `transformers` people a bit more here."
] |
https://api.github.com/repos/huggingface/datasets/issues/5453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5453/comments | https://api.github.com/repos/huggingface/datasets/issues/5453/events | https://github.com/huggingface/datasets/pull/5453 | 1,552,727,425 | PR_kwDODunzps5ITraa | 5,453 | Fix base directory while extracting insecure TAR files | [] | closed | false | null | 3 | 2023-01-23T08:57:40Z | 2023-01-24T01:34:20Z | 2023-01-23T10:10:42Z | null | This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared:
- from: "."
- to: `output_path`
This PR also adds tests for extracting insecure TAR files.
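As a rough illustration of the idea, a minimal sketch of such a check might look like the following (not the actual `datasets` implementation; the helper name `safe_extract` is made up here):

```python
import os
import tarfile

def safe_extract(tar_path: str, output_path: str) -> None:
    """Extract a TAR archive, refusing members that would land outside `output_path`."""
    abs_output = os.path.realpath(output_path)
    with tarfile.open(tar_path) as tar:
        for member in tar.getmembers():
            # Resolve each member against the extraction directory, not the cwd (".").
            target = os.path.realpath(os.path.join(abs_output, member.name))
            if os.path.commonpath([abs_output, target]) != abs_output:
                raise ValueError(f"Insecure TAR member outside output dir: {member.name}")
        tar.extractall(output_path)
```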
Related to:
- #5441
- #5452
@stas00 please note this PR addresses just one of the issues you pointed out: the use of the cwd by the extractor. The other issues (actionable error messages, raise instead of log error) should be addressed in other PRs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5453/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5453.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5453",
"merged_at": "2023-01-23T10:10:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5453.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5453"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008215 / 0.011353 (-0.003138) | 0.004510 / 0.011008 (-0.006498) | 0.099270 / 0.038508 (0.060761) | 0.028682 / 0.023109 (0.005573) | 0.332726 / 0.275898 (0.056827) | 0.371025 / 0.323480 (0.047545) | 0.006665 / 0.007986 (-0.001320) | 0.003329 / 0.004328 (-0.001000) | 0.078509 / 0.004250 (0.074259) | 0.032388 / 0.037052 (-0.004664) | 0.348540 / 0.258489 (0.090051) | 0.382212 / 0.293841 (0.088371) | 0.033307 / 0.128546 (-0.095239) | 0.011642 / 0.075646 (-0.064004) | 0.322573 / 0.419271 (-0.096699) | 0.041297 / 0.043533 (-0.002236) | 0.322710 / 0.255139 (0.067571) | 0.361593 / 0.283200 (0.078394) | 0.082276 / 0.141683 (-0.059407) | 1.481932 / 1.452155 (0.029777) | 1.531677 / 1.492716 (0.038961) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194964 / 0.018006 (0.176958) | 0.406002 / 0.000490 (0.405512) | 0.001015 / 0.000200 (0.000815) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023317 / 0.037411 (-0.014095) | 0.097231 / 0.014526 (0.082705) | 0.103898 / 0.176557 (-0.072659) | 0.139864 / 0.737135 (-0.597271) | 0.106785 / 0.296338 (-0.189554) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419036 / 0.215209 (0.203827) | 4.193985 / 2.077655 (2.116330) | 1.879069 / 1.504120 (0.374949) | 1.675384 / 1.541195 (0.134190) | 1.696225 / 1.468490 
(0.227735) | 0.695257 / 4.584777 (-3.889520) | 3.437971 / 3.745712 (-0.307741) | 2.656037 / 5.269862 (-2.613824) | 1.463320 / 4.565676 (-3.102356) | 0.082575 / 0.424275 (-0.341700) | 0.012593 / 0.007607 (0.004986) | 0.526643 / 0.226044 (0.300599) | 5.278366 / 2.268929 (3.009437) | 2.288106 / 55.444624 (-53.156518) | 1.954875 / 6.876477 (-4.921602) | 1.950641 / 2.142072 (-0.191431) | 0.808289 / 4.805227 (-3.996938) | 0.148790 / 6.500664 (-6.351875) | 0.064775 / 0.075469 (-0.010694) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215219 / 1.841788 (-0.626569) | 13.551467 / 8.074308 (5.477159) | 13.841547 / 10.191392 (3.650155) | 0.153610 / 0.680424 (-0.526814) | 0.028308 / 0.534201 (-0.505893) | 0.397087 / 0.579283 (-0.182196) | 0.401724 / 0.434364 (-0.032640) | 0.458042 / 0.540337 (-0.082296) | 0.544955 / 1.386936 (-0.841981) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006321 / 0.011353 (-0.005032) | 0.004336 / 0.011008 (-0.006673) | 0.097196 / 0.038508 (0.058688) | 0.026933 / 0.023109 (0.003824) | 0.416520 / 0.275898 (0.140622) | 0.450703 / 0.323480 (0.127223) | 0.004831 / 0.007986 (-0.003155) | 0.003252 / 0.004328 (-0.001076) | 0.074981 / 0.004250 (0.070730) | 0.036136 / 0.037052 (-0.000917) | 0.423166 / 0.258489 (0.164677) | 0.460936 / 0.293841 (0.167095) | 0.031859 / 0.128546 (-0.096687) | 0.011500 / 0.075646 (-0.064146) | 0.318197 / 0.419271 (-0.101074) | 0.041472 / 0.043533 (-0.002061) | 0.419227 / 0.255139 (0.164088) | 0.444712 / 0.283200 (0.161512) | 0.088841 / 0.141683 (-0.052841) | 1.497237 / 1.452155 (0.045083) | 1.572111 / 1.492716 (0.079395) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.239261 / 0.018006 (0.221255) | 0.400358 / 0.000490 (0.399868) | 0.003460 / 0.000200 (0.003261) | 0.000078 / 0.000054 (0.000024) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024016 / 0.037411 (-0.013395) | 0.098414 / 0.014526 (0.083888) | 0.107220 / 0.176557 (-0.069337) | 0.143538 / 0.737135 (-0.593598) | 0.108607 / 0.296338 (-0.187731) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.473896 / 0.215209 (0.258687) | 4.740386 / 2.077655 (2.662731) | 2.458046 / 1.504120 (0.953926) | 2.260895 / 1.541195 (0.719700) | 2.280218 / 1.468490 (0.811728) | 0.694843 / 4.584777 (-3.889934) | 3.349795 / 3.745712 (-0.395917) | 1.846970 / 5.269862 (-3.422892) | 1.151481 / 4.565676 (-3.414195) | 0.082054 / 0.424275 (-0.342221) | 0.012664 / 0.007607 (0.005057) | 0.573400 / 0.226044 (0.347355) | 5.750648 / 2.268929 (3.481720) | 2.904257 / 55.444624 (-52.540367) | 2.555181 / 6.876477 (-4.321295) | 2.595830 / 2.142072 (0.453758) | 0.799580 / 4.805227 (-4.005647) | 0.151088 / 6.500664 (-6.349576) | 0.066639 / 0.075469 (-0.008831) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.251413 / 1.841788 (-0.590375) | 13.743368 / 8.074308 (5.669060) | 13.808729 / 10.191392 (3.617337) | 0.144765 / 0.680424 (-0.535659) | 0.016606 / 0.534201 (-0.517594) | 0.376503 / 0.579283 (-0.202780) | 0.381510 / 0.434364 (-0.052854) | 0.440295 / 0.540337 (-0.100043) | 0.524248 / 1.386936 (-0.862688) |\n\n</details>\n</details>\n\n\n",
"Thanks a lot, @albertvillanova - I validated that your fix solves the original problem!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2231/comments | https://api.github.com/repos/huggingface/datasets/issues/2231/events | https://github.com/huggingface/datasets/pull/2231 | 859,850,488 | MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx | 2,231 | Fix map when removing columns on a formatted dataset | [] | closed | false | null | 0 | 2021-04-16T14:08:55Z | 2021-04-16T15:10:05Z | 2021-04-16T15:10:04Z | null | This should fix issue #2226
The `remove_columns` argument was ignored on formatted datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2231/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2231",
"merged_at": "2021-04-16T15:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2231"
} | true | [] |
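For PR #2231 above, a minimal sketch of the call pattern the fix concerns, using a toy dataset (not code from the PR itself):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [10, 20, 30]})
ds = ds.with_format("numpy")  # a "formatted" dataset

# Per the issue, `remove_columns` used to be ignored in this situation.
ds = ds.map(lambda example: example, remove_columns=["b"])
print(ds.column_names)  # expected after the fix: ['a']
```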
https://api.github.com/repos/huggingface/datasets/issues/4953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4953/comments | https://api.github.com/repos/huggingface/datasets/issues/4953/events | https://github.com/huggingface/datasets/issues/4953 | 1,366,356,514 | I_kwDODunzps5RcPIi | 4,953 | CI test of TensorFlow is failing | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-09-08T13:39:29Z | 2022-09-08T15:14:45Z | 2022-09-08T15:14:45Z | null | ## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
@require_tf
def test_tensorflow(self):
import tensorflow as tf
from tensorflow.keras import layers
def gen_random_output():
model = layers.Dense(2)
x = tf.random.uniform((1, 3))
return model(x).numpy()
with temp_seed(42, set_tensorflow=True):
out1 = gen_random_output()
with temp_seed(42, set_tensorflow=True):
out2 = gen_random_output()
out3 = gen_random_output()
> np.testing.assert_equal(out1, out2)
E AssertionError:
E Arrays are not equal
E
E Mismatched elements: 2 / 2 (100%)
E Max absolute difference: 0.84619296
E Max relative difference: 16.083529
E x: array([[-0.793581, 0.333286]], dtype=float32)
E y: array([[0.052612, 0.539708]], dtype=float32)
tests/test_py_utils.py:149: AssertionError
```
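For reference, the assertion above expects repeated draws to match under the same seed; a minimal standalone sketch of that expectation, using `tf.random.set_seed` directly rather than the internal `temp_seed` helper:

```python
import numpy as np
import tensorflow as tf

def gen_random_output():
    return tf.random.uniform((1, 3)).numpy()

tf.random.set_seed(42)
out1 = gen_random_output()
tf.random.set_seed(42)
out2 = gen_random_output()

np.testing.assert_equal(out1, out2)  # same global seed and op order -> identical draws
```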
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4953/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4953/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3699/comments | https://api.github.com/repos/huggingface/datasets/issues/3699/events | https://github.com/huggingface/datasets/pull/3699 | 1,130,200,593 | PR_kwDODunzps4yY49I | 3,699 | Add dev-only config to Natural Questions dataset | [] | closed | false | null | 2 | 2022-02-10T14:42:24Z | 2022-02-11T09:50:22Z | 2022-02-11T09:50:21Z | null | As suggested by @lhoestq and @thomwolf, a new config has been added to the Natural Questions dataset, so that only the dev split can be downloaded.
Fix #413. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3699/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3699.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3699",
"merged_at": "2022-02-11T09:50:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3699.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3699"
} | true | [
"Great thanks ! I think we can fix the CI by copying the NQ folder on gcs to 0.0.3. Does that sound good ?",
"I've copied the 0.0.2 folder content to 0.0.3, as suggested.\r\n\r\nI'm updating the dataset card..."
] |
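For PR #3699 above, assuming the new dev-only configuration is exposed under a name like `"dev"` (an assumed name, as is the `"validation"` split; check the dataset card for the exact values), a minimal usage sketch might look like:

```python
from datasets import load_dataset

# "dev" is an assumed config name; "validation" is the assumed split it exposes.
nq_dev = load_dataset("natural_questions", "dev", split="validation")
print(nq_dev)
```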
https://api.github.com/repos/huggingface/datasets/issues/5097 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5097/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5097/comments | https://api.github.com/repos/huggingface/datasets/issues/5097/events | https://github.com/huggingface/datasets/issues/5097 | 1,403,679,353 | I_kwDODunzps5TqnJ5 | 5,097 | Fatal error with pyarrow/libarrow.so | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-10-10T20:29:04Z | 2022-10-11T06:56:01Z | 2022-10-11T06:56:00Z | null | ## Describe the bug
When using datasets, at the very end of my jobs the program crashes (see trace below).
It doesn't seem to affect anything, since it appears to happen while the program is shutting down. Just importing `datasets` is enough to cause the error.
## Steps to reproduce the bug
This is sufficient to reproduce the problem:
```bash
python -c "import datasets"
```
## Expected results
Program should run to completion without an error.
## Actual results
```bash
Fatal error condition occurred in /opt/vcpkg/buildtrees/aws-c-io/src/9e6648842a-364b708815.clean/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS
Exiting Application
################################################################################
Stack trace:
################################################################################
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200af06) [0x150dff547f06]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x20028e5) [0x150dff53f8e5]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f27e09) [0x150dff464e09]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1f25948) [0x150dff462948]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x200ba3d) [0x150dff548a3d]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x1ee0b46) [0x150dff41db46]
/u/user/miniconda3/envs/env/lib/python3.10/site-packages/pyarrow/libarrow.so.900(+0x194546a) [0x150dfee8246a]
/lib64/libc.so.6(+0x39b0c) [0x150e15eadb0c]
/lib64/libc.so.6(on_exit+0) [0x150e15eadc40]
/u/user/miniconda3/envs/env/bin/python(+0x28db18) [0x560ae370eb18]
/u/user/miniconda3/envs/env/bin/python(+0x28db4b) [0x560ae370eb4b]
/u/user/miniconda3/envs/env/bin/python(+0x28db90) [0x560ae370eb90]
/u/user/miniconda3/envs/env/bin/python(_PyRun_SimpleFileObject+0x1e6) [0x560ae37123e6]
/u/user/miniconda3/envs/env/bin/python(_PyRun_AnyFileObject+0x44) [0x560ae37124c4]
/u/user/miniconda3/envs/env/bin/python(Py_RunMain+0x35d) [0x560ae37135bd]
/u/user/miniconda3/envs/env/bin/python(Py_BytesMain+0x39) [0x560ae37137d9]
/lib64/libc.so.6(__libc_start_main+0xf3) [0x150e15e97493]
/u/user/miniconda3/envs/env/bin/python(+0x2125d4) [0x560ae36935d4]
Aborted (core dumped)
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.28
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5097/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5097/timeline | null | completed | null | null | false | [
"Thanks for reporting, @catalys1.\r\n\r\nThis seems a duplicate of:\r\n- #3310 \r\n\r\nThe source of the problem is in PyArrow:\r\n- [ARROW-15141: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-15141)\r\n- [ARROW-17501: [C++] Fatal error condition occurred in aws_thread_launch](https://issues.apache.org/jira/browse/ARROW-17501)\r\n\r\nThe bug in their dependency is still unresolved:\r\n- https://github.com/aws/aws-sdk-cpp/issues/1809\r\n\r\nApparently, the `aws-sdk-cpp` PyArrow dependency needs to be pinned at version `1.8.186` if using conda. Have you updated it after installing PyArrow?\r\n```shell\r\nconda list aws-sdk-cpp\r\n```\r\n\r\nMaybe you should try to downgrade it to that version:\r\n```shell\r\nconda install -c conda-forge aws-sdk-cpp=1.8.186\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/5011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5011/comments | https://api.github.com/repos/huggingface/datasets/issues/5011/events | https://github.com/huggingface/datasets/issues/5011 | 1,382,609,587 | I_kwDODunzps5SaPKz | 5,011 | Audio: `encode_example` fails with IndexError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-09-22T15:07:27Z | 2022-09-23T09:05:18Z | 2022-09-23T09:05:18Z | null | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an IndexError. I created this dataset locally and then pushed it to the Hub at the specified URL, so I expect the dataset to work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally.
I don't think it's a soundfile bug, as the version matches what worked previously.
Update: the bug appeared for me on a GPU; mysteriously, on a TPU I can't reproduce it and the dataset downloads correctly...
## Steps to reproduce the bug
```python
from datasets import load_dataset
earnings22 = load_dataset("sanchit-gandhi/earnings22_split")
```
## Expected results
```
>>> earnings22
DatasetDict({
validation: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2650
})
train: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 52006
})
test: Dataset({
features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'],
num_rows: 2735
})
})
```
## Actual results
```
Traceback (most recent call last):
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single
writer.write(example)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write
self.write_examples_on_file()
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file
self.write_batch(batch_examples=batch_examples)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 231, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature
return feature.cast_storage(array)
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp>
storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()])
File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example
sf.write(buffer, value["array"], value["sampling_rate"], format="wav")
File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write
channels = data.shape[1]
IndexError: tuple index out of range
```
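The final frame indexes `data.shape[1]`, which only exists when the array has a second (channel) axis; a minimal numpy-only sketch of why that indexing raises this `IndexError` (independent of soundfile's own handling):

```python
import numpy as np

stereo = np.zeros((16000, 2), dtype=np.float32)
print(stereo.shape[1])  # 2 -> a second axis exists, so shape[1] works

mono = np.zeros(16000, dtype=np.float32)
try:
    mono.shape[1]  # shape is (16000,): there is no index 1
except IndexError as err:
    print(err)  # "tuple index out of range", matching the traceback above
```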
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
Plus:
- SoundFile version: 0.10.3.post1
cc @lhoestq @polinaeterna | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5011/timeline | null | completed | null | null | false | [
"Sorry bug on my part ๐
Closing "
] |
https://api.github.com/repos/huggingface/datasets/issues/4544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4544/comments | https://api.github.com/repos/huggingface/datasets/issues/4544/events | https://github.com/huggingface/datasets/issues/4544 | 1,280,500,340 | I_kwDODunzps5MUuJ0 | 4,544 | [CI] seqeval installation fails sometimes on python 3.6 | [] | closed | false | null | 0 | 2022-06-22T16:35:23Z | 2022-06-23T10:13:44Z | 2022-06-23T10:13:44Z | null | The CI sometimes fails to install seqeval, which causes the `seqeval` metric tests to fail.
The installation fails because of this error:
```
Collecting seqeval
Downloading seqeval-1.2.2.tar.gz (43 kB)
     |████████                        | 10 kB 42.1 MB/s eta 0:00:01
     |███████████████                 | 20 kB 53.3 MB/s eta 0:00:01
     |███████████████████████         | 30 kB 67.2 MB/s eta 0:00:01
     |██████████████████████████████  | 40 kB 76.1 MB/s eta 0:00:01
     |████████████████████████████████| 43 kB 10.0 MB/s
Preparing metadata (setup.py) ... - error
ERROR: Command errored out with exit status 1:
command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy
cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/
Complete output (22 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module>
'Programming Language :: Python :: Implementation :: PyPy'
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup
return distutils.core.setup(**attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup
_setup_distribution = dist = klass(attrs)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__
k: v for k, v in attrs.items()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__
self.finalize_options()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options
ep.load()(self, ep.name, value)
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load
return self.resolve()
File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5
from __future__ import annotations
^
SyntaxError: future feature annotations is not defined
----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300
Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT
This could be caused by the latest updates of setuptools-scm | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4544/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4193/comments | https://api.github.com/repos/huggingface/datasets/issues/4193/events | https://github.com/huggingface/datasets/pull/4193 | 1,210,734,701 | PR_kwDODunzps42izQG | 4,193 | Document save_to_disk and push_to_hub on images and audio files | [] | closed | false | null | 2 | 2022-04-21T09:04:36Z | 2022-04-22T09:55:55Z | 2022-04-22T09:49:31Z | null | Following https://github.com/huggingface/datasets/pull/4187, I explained in the documentation of `save_to_disk` and `push_to_hub` how they handle image and audio data. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4193/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4193.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4193",
"merged_at": "2022-04-22T09:49:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4193.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4193"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Good catch, I updated the docstrings"
] |
https://api.github.com/repos/huggingface/datasets/issues/366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/366/comments | https://api.github.com/repos/huggingface/datasets/issues/366/events | https://github.com/huggingface/datasets/pull/366 | 653,954,896 | MDExOlB1bGxSZXF1ZXN0NDQ2NzcyODE2 | 366 | Add quora dataset | [] | closed | false | null | 2 | 2020-07-09T10:34:22Z | 2020-07-13T17:35:21Z | 2020-07-13T17:35:21Z | null | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it.
- I've made the questions into a list:
```python
{
"questions": [
{"id":0, "text": "Is this an example question?"},
{"id":1, "text": "Is this a sample question?"},
],
...
}
```
rather than:
```python
{
"question1": "Is this an example question?",
"question2": "Is this a sample question?"
"qid0": 0
"qid1": 1
...
}
```
Not sure if this was the right call.
- Can't find a good citation for this dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/366/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/366",
"merged_at": "2020-07-13T17:35:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/366"
} | true | [
"Tests seem to be failing because of pandas",
"Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now"
] |
https://api.github.com/repos/huggingface/datasets/issues/1023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1023/comments | https://api.github.com/repos/huggingface/datasets/issues/1023/events | https://github.com/huggingface/datasets/pull/1023 | 755,655,752 | MDExOlB1bGxSZXF1ZXN0NTMxMzI3MTMy | 1,023 | Add Schema Guided Dialogue dataset | [] | closed | false | null | 0 | 2020-12-02T22:26:01Z | 2020-12-03T01:18:01Z | 2020-12-03T01:18:01Z | null | This PR adds the Schema Guided Dialogue dataset created for the DSTC8 challenge
- https://github.com/google-research-datasets/dstc8-schema-guided-dialogue
This dataset is a bit simpler than MultiWOZ; the only tricky part is the sequence of dictionaries, which had to be linearized. There is a config for the data proper and a config for the schemas. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1023/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1023",
"merged_at": "2020-12-03T01:18:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1023"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5031/comments | https://api.github.com/repos/huggingface/datasets/issues/5031/events | https://github.com/huggingface/datasets/pull/5031 | 1,388,201,146 | PR_kwDODunzps4_t82_ | 5,031 | Support hfh 0.10 implicit auth | [] | closed | false | null | 4 | 2022-09-27T18:37:49Z | 2022-09-30T09:18:24Z | 2022-09-30T09:15:59Z | null | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover if use_auth_token=None then the user's token is used implicitly.
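For illustration, a minimal sketch of what the updated calls look like (the repo id is a placeholder; the argument names follow the hfh 0.10 deprecation described above):
```python
from huggingface_hub import HfApi

api = HfApi()
# token= is deprecated in hfh 0.10 in favor of use_auth_token;
# use_auth_token=None implicitly falls back to the user's locally saved token
info = api.dataset_info("username/some-dataset", use_auth_token=None)
files = api.list_repo_files("username/some-dataset", repo_type="dataset", use_auth_token=None)
```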
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fix tests
We should wait for hfh 0.10 to be released first to make sure it works correctly before merging | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5031/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"merged_at": "2022-09-30T09:15:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I edited the CI job to also check for our minimum supported version of hfh at the same time as the minimum pyarrow version",
"@lhoestq great, thanks ! :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4624/comments | https://api.github.com/repos/huggingface/datasets/issues/4624/events | https://github.com/huggingface/datasets/pull/4624 | 1,293,085,058 | PR_kwDODunzps46yzOK | 4,624 | Remove all paperswithcode_id: null | [] | closed | false | null | 3 | 2022-07-04T12:11:32Z | 2022-07-04T13:22:00Z | 2022-07-04T13:10:38Z | null | On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`:
<img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png">
We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.
To have the validation working again we can simply remove all the `paperswithcode_id: null`.
cc @julien-c | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4624/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4624.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4624",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4624.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4624"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side",
"Yup it's maybe better to support it on the Hub side then indeed, thanks ! Closing this one"
] |
https://api.github.com/repos/huggingface/datasets/issues/61 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/61/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/61/comments | https://api.github.com/repos/huggingface/datasets/issues/61/events | https://github.com/huggingface/datasets/pull/61 | 614,607,474 | MDExOlB1bGxSZXF1ZXN0NDE1MTI3MTU4 | 61 | [Load] rename setup_module to prepare_module | [] | closed | false | null | 0 | 2020-05-08T08:54:22Z | 2020-05-08T08:56:32Z | 2020-05-08T08:56:16Z | null | rename setup_module to prepare_module due to issues with pytests `setup_module` function.
See: PR #59. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/61/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/61/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/61.diff",
"html_url": "https://github.com/huggingface/datasets/pull/61",
"merged_at": "2020-05-08T08:56:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/61.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/61"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2287/comments | https://api.github.com/repos/huggingface/datasets/issues/2287/events | https://github.com/huggingface/datasets/pull/2287 | 871,063,374 | MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3 | 2,287 | Avoid copying table's record batches | [] | closed | false | null | 1 | 2021-04-29T14:15:01Z | 2021-04-29T16:34:23Z | 2021-04-29T16:34:22Z | null | Fixes #2276 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2287/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2287",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2287"
} | true | [
"Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3948/comments | https://api.github.com/repos/huggingface/datasets/issues/3948/events | https://github.com/huggingface/datasets/pull/3948 | 1,171,460,560 | PR_kwDODunzps40jg1F | 3,948 | Google BLEU Metric Card | [] | closed | false | null | 1 | 2022-03-16T19:27:17Z | 2022-03-21T16:04:26Z | 2022-03-21T16:04:25Z | null | Add metric card for Google BLEU (GLEU) metric
One thing I noticed while writing this up is that, while this metric was made specifically to be better than BLEU at the sentence level instead of the corpus level, the current implementation only allows the calculation of the corpus-level statistic. I think changing this would be a good thing to put on the to do list for the future. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3948/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3948/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3948.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3948",
"merged_at": "2022-03-21T16:04:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3948.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3948"
} | true | [
"A few things that aren't clear for me:\r\n- \"Because it performs better on individual sentence pairs as compared to BLEU, Google BLEU has also been used in RL experiments.\" -- why is this the case? why would that make it more usable for RL? (also, you should put \"Reinforcement Learning\" explicitly, not just the acronym)\r\n- (Minor issue) -- I put inputs before the first example code, I think that's clearer somehow\r\n\r\nOtherwise, it looks great, good job @emibaylor !\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/277/comments | https://api.github.com/repos/huggingface/datasets/issues/277/events | https://github.com/huggingface/datasets/issues/277 | 640,163,053 | MDU6SXNzdWU2NDAxNjMwNTM= | 277 | Empty samples in glue/qqp | [] | closed | false | null | 2 | 2020-06-17T05:54:52Z | 2020-06-21T00:21:45Z | 2020-06-21T00:21:45Z | null | ```
qqp = nlp.load_dataset('glue', 'qqp')
print(qqp['train'][310121])
print(qqp['train'][362225])
```
```
{'question1': 'How can I create an Android app?', 'question2': '', 'label': 0, 'idx': 310137}
{'question1': 'How can I develop android app?', 'question2': '', 'label': 0, 'idx': 362246}
```
Notice that question 2 is an empty string.
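(Not part of the original report: if these rows cause trouble downstream, one possible workaround, assuming a library version that has `Dataset.filter`, is to drop them, e.g.:)
```python
# keep only pairs where both questions are non-empty
qqp["train"] = qqp["train"].filter(
    lambda ex: ex["question1"] != "" and ex["question2"] != ""
)
```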
BTW, I have checked and these two are the only naughty ones in all splits of qqp. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/277/timeline | null | completed | null | null | false | [
"We are only wrapping the original dataset.\r\n\r\nMaybe try to ask on the GLUE mailing list or reach out to the original authors?",
"Tanks for the suggestion, I'll try to ask GLUE benchmark.\r\nI'll first close the issue, post the following up here afterwards, and reopen the issue if needed. "
] |
https://api.github.com/repos/huggingface/datasets/issues/4903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4903/comments | https://api.github.com/repos/huggingface/datasets/issues/4903/events | https://github.com/huggingface/datasets/pull/4903 | 1,352,539,075 | PR_kwDODunzps494aud | 4,903 | Fix CI reporting | [] | closed | false | null | 1 | 2022-08-26T17:16:30Z | 2022-08-26T17:49:33Z | 2022-08-26T17:46:59Z | null | Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845
This introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and error. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4903/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4903",
"merged_at": "2022-08-26T17:46:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4903"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1348/comments | https://api.github.com/repos/huggingface/datasets/issues/1348/events | https://github.com/huggingface/datasets/pull/1348 | 759,869,849 | MDExOlB1bGxSZXF1ZXN0NTM0Nzk3Nzcy | 1,348 | add Yoruba NER dataset | [] | closed | false | null | 4 | 2020-12-08T23:42:35Z | 2020-12-10T14:30:25Z | 2020-12-10T14:09:43Z | null | Added Yoruba GV dataset based on this paper | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1348/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1348/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1348.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1348",
"merged_at": "2020-12-10T14:09:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1348.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1348"
} | true | [
"Thank you. Okay, other pull requests only have one dataset",
"The `RemoteDatasetTest` error in the CI is just a connection error, we can ignore it",
"merging since the CI is fixed on master",
"Thank you very much"
] |
https://api.github.com/repos/huggingface/datasets/issues/4059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4059/comments | https://api.github.com/repos/huggingface/datasets/issues/4059/events | https://github.com/huggingface/datasets/pull/4059 | 1,186,149,949 | PR_kwDODunzps41TC-o | 4,059 | Load GitHub datasets from Hub | [] | closed | false | null | 10 | 2022-03-30T09:21:56Z | 2022-09-16T12:43:26Z | 2022-09-16T12:40:43Z | null | We have recurrently had connection errors when requesting GitHub because sometimes the site is not available.
This PR requests the Hub instead, once all GitHub datasets are mirrored on the Hub.
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4059/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4059.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4059",
"merged_at": "2022-09-16T12:40:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4059.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4059"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Currently the github datasets versioning is synced with the `datasets` lib versioning: when you load a github dataset using `datasets==x.y.z`, then the version of the dataset will be the one at the git tag `x.y.z`. This is for reproducibility reasons.\r\n\r\nWe could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. It could be nice to think about tools that will allow backward compatibility if we ever need to to a breaking change in some datasets. Maybe a way to specify which revision of the dataset to use based on the `datasets` major version.\r\n\r\nIf we keep this behavior, then maybe add a note in setup.py to push to PyPI only after the `Update Hub repositories` CI job is done. It can take a few minutes to add the version tag to all the dataset repositories on the Hub. If we push to PyPI before the tags are pushed, then some users might get some 404 if at the same time they installed `datasets` and run `load_dataset`.",
"@lhoestq I was going to increase the `max_retries` as done for metrics:\r\n- #4063 \r\n\r\nBut then I realized that loading from the Hub would work as well. That is why I opened this PR.\r\n\r\nDefinitely, we should decide which behavior we want:\r\n- We have been working in the direction of eliminating the distinctions between canonical/community datasets\r\n- If we continue to go in that direction, then passing (or not passing) `revision` should have the same behavior for canonical/community\r\n- If we want to continue to tight the library version with the canonical datasets version, that is definitely a difference between canonical and community datasets\r\n\r\nNot sure what could be better in the long term...",
"> We could stop having this behavior and always use the latest version of the dataset, but when we do a breaking change it will break github datasets for previous versions of the library. \r\n\r\nNot sure of understanding this. Previous versions of the `datasets` library will continue to download GitHub datasets from GitHub, syncing library/dataset versions... Where is the problem?",
"Yes you're right, previous versions of `datasets` will still continue to download from github, but not future versions.\r\nIf we release `datasets` 2.1 by removing this behavior and if one day we release `datasets` 3.0 with a breaking change in the dataset scripts, then all version >=2.1 will break.",
"Ideally we should drop the differences between github datasets and community datasets, and maybe provide a way to fallback on an older version of a dataset repository if the user's `datasets` version is too old and incompatible with it.",
"I just noticed I literally opened the same PR lol\r\n\r\nI'm still convinced that we should do a better version compatibility check but we can see that later IMO",
"Normally in open source projects, when there is a duplicate PR, the latter is tagged as \"duplicate\" and closed. :stuck_out_tongue_winking_eye: \r\n\r\nLet me make things clear in my mind: so you say that the blocking point that was preventing this PR from merging, now is no longer a blocking point and could be addresses in a subsequent PR?",
"Let me close the duplicate one, sorry\r\n\r\n> Let me make things clear my mind: so you say that the blocking point that was preventing this PR from merging now is no longer a blocking point and could be addresses in a subsequent PR?\r\n\r\nYes ๐",
"> Note that after this PR, all the changes made to a dataset will affect all the datasets version from now on\r\n\r\nYes, we have aligned this behavior with Hub datasets, as this is already the case for Hub datasets."
] |
https://api.github.com/repos/huggingface/datasets/issues/3590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3590/comments | https://api.github.com/repos/huggingface/datasets/issues/3590/events | https://github.com/huggingface/datasets/pull/3590 | 1,106,784,860 | PR_kwDODunzps4xMlGg | 3,590 | Update ANLI README.md | [] | closed | false | null | 0 | 2022-01-18T11:22:53Z | 2022-01-20T16:58:41Z | 2022-01-20T16:58:41Z | null | Update license and little things concerning ANLI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3590/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3590",
"merged_at": "2022-01-20T16:58:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3590"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5386/comments | https://api.github.com/repos/huggingface/datasets/issues/5386/events | https://github.com/huggingface/datasets/issues/5386 | 1,508,592,918 | I_kwDODunzps5Z600W | 5,386 | `max_shard_size` in `datasets.push_to_hub()` breaks with large files | [] | closed | false | null | 2 | 2022-12-22T21:50:58Z | 2022-12-26T23:45:51Z | 2022-12-26T23:45:51Z | null | ### Describe the bug
`max_shard_size` parameter for `datasets.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit.
In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_size='100MB'` results in shard files that are `>2GB` in size. Setting `max_shard_size` to another value, such as `1GB` or `500MB` does not fix this problem.
**The real problem is this:** When the shard file size grows too big, the entire dataset breaks because of #4721 and ultimately https://issues.apache.org/jira/browse/ARROW-5030. Since `max_shard_size` does not let one accurately control the size of the shard files, it becomes very easy to build a large dataset without any warnings that it will be broken -- even when you think you are mitigating this problem by setting `max_shard_size`.
```
File " /path/to/sd-test-suite-v1/venv/lib/site-packages/datasets/builder.py", line 1763, in _prepare_split_single
for _, table in generator:
File " /path/to/sd-test-suite-v1/venv/lib/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
```
### Steps to reproduce the bug
1. Clone [example repo](https://github.com/salieri/hf-dataset-shard-size-bug)
2. Follow steps in [README.md](https://github.com/salieri/hf-dataset-shard-size-bug/blob/main/README.md)
3. After uploading the dataset, you will see that the shard file size varies between `30MB` and `200MB` -- way beyond the `max_shard_size='75MB'` limit (example: `train-00003-of-00131...` is `155MB` in [here](https://huggingface.co/datasets/slri/shard-size-test/tree/main/data))
(Note that this example repo does not generate shard files that are so large that they would trigger #4721)
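For reference, a rough sketch of the kind of call involved (the loader and repo id below are placeholders, not the exact code from the example repo):
```python
from datasets import load_dataset

# assuming an image dataset along the lines of the example repo
ds = load_dataset("imagefolder", data_dir="./images", split="train")

# shards produced this way can end up far larger than the requested 75MB
ds.push_to_hub("username/shard-size-test", max_shard_size="75MB")
```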
### Expected behavior
The shard file size should remain below or equal to `max_shard_size`.
### Environment info
- `datasets` version: 2.8.0
- Platform: Linux-5.10.157-139.675.amzn2.aarch64-aarch64-with-glibc2.17
- Python version: 3.7.15
- PyArrow version: 10.0.1
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5386/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nThis behavior stems from the fact that we don't always embed image bytes in the underlying arrow table, which can lead to bad size estimation (we use the first 1000 table rows to [estimate](https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L4627) the external file size). We plan to address this in the next major release by always embedding external bytes. In the meantime, you can either shuffle the dataset with `.shuffle().flatten_indices()` to make the estimation more precise or embed the bytes in the table like so:\r\n```python\r\nfrom datasets.table import embed_table_storage\r\nformat = ds.format\r\nds = ds.with_format(\"arrow\")\r\nds = ds.map(embed_table_storage, batched=True)\r\nds = ds.with_format(**format)\r\n...\r\nds.push_to_hub(...)\r\n```",
"Embedding the bytes worked like charm. Thanks @mariosasko!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2475/comments | https://api.github.com/repos/huggingface/datasets/issues/2475/events | https://github.com/huggingface/datasets/issues/2475 | 917,650,882 | MDU6SXNzdWU5MTc2NTA4ODI= | 2,475 | Issue in timit_asr database | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-06-10T18:05:29Z | 2021-06-13T08:13:50Z | 2021-06-13T08:13:13Z | null | ## Describe the bug
I am trying to load the timit_asr dataset; however, only the first record is shown (duplicated over all the rows).
I am using the following line of code:
dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
The above code results in the same sentence duplicated ten times.
It also happens when I use the dataset viewer at Streamlit.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
data = dataset.to_pandas()
```
## Expected results
table with different row information
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1 (also occur in the latest version)
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 1.15.3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2475/timeline | null | completed | null | null | false | [
"This bug was fixed in #1995. Upgrading datasets to version 1.6 fixes the issue!",
"Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks "
] |
https://api.github.com/repos/huggingface/datasets/issues/5337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5337/comments | https://api.github.com/repos/huggingface/datasets/issues/5337/events | https://github.com/huggingface/datasets/issues/5337 | 1,481,692,156 | I_kwDODunzps5YUNP8 | 5,337 | Support webdataset format | [] | open | false | null | 4 | 2022-12-07T11:32:25Z | 2023-05-26T10:34:45Z | null | null | Webdataset is an efficient format for iterable datasets. It would be nice to support it in `datasets`, as discussed in https://github.com/rom1504/img2dataset/issues/234.
In particular it would be awesome to be able to load one using `load_dataset` in streaming mode (either from a local directory, or from a dataset on the Hugging Face Hub). Some datasets on the Hub are already in webdataset format.
In terms of implementation, we can have something similar to the Parquet loader.
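To make the request concrete, here is a sketch: the commented-out `load_dataset` call is the proposed API (it does not exist yet), while the rest uses the existing `webdataset` library for comparison:
```python
# Proposed API (illustrative only, not implemented):
# from datasets import load_dataset
# ds = load_dataset("webdataset", data_files={"train": "train-{000000..000999}.tar"}, streaming=True)

# What the webdataset library itself offers today:
import webdataset as wds

ds = wds.WebDataset("train-{000000..000999}.tar").decode("pil")
for sample in ds:
    print(sample.keys())
    break
```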
I also think it's fine to have webdataset as an optional dependency. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5337/timeline | null | null | null | null | false | [
"I like the idea of having `webdataset` as an optional dependency to ensure our loader generates web datasets the same way as the main project.",
"Webdataset is the one of the most popular dataset formats for large scale computer vision tasks. Upvote for this issue. ",
"Any updates on this?",
"We haven't had the bandwidth to implement it so far, but if someone wants to give it a shot please don't hesitate ^^"
] |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | [] | closed | false | null | 2 | 2022-06-01T12:09:56Z | 2022-06-09T10:34:54Z | 2022-06-09T10:26:07Z | null | Fix #4348 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"merged_at": "2022-06-09T10:26:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Added back the `[:]` and a comment to explain why this is needed. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1542/comments | https://api.github.com/repos/huggingface/datasets/issues/1542/events | https://github.com/huggingface/datasets/pull/1542 | 765,439,746 | MDExOlB1bGxSZXF1ZXN0NTM4OTYyMjAx | 1,542 | fix typo readme | [] | closed | false | null | 0 | 2020-12-13T14:41:22Z | 2020-12-13T17:16:41Z | 2020-12-13T17:16:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1542/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1542",
"merged_at": "2020-12-13T17:16:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1542"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1185/comments | https://api.github.com/repos/huggingface/datasets/issues/1185/events | https://github.com/huggingface/datasets/pull/1185 | 757,825,413 | MDExOlB1bGxSZXF1ZXN0NTMzMTI0NzE1 | 1,185 | Add Hate Speech Dataset in Filipino | [] | closed | false | null | 0 | 2020-12-06T02:01:56Z | 2020-12-07T15:35:33Z | 2020-12-07T15:35:33Z | null | This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections.
Link to the paper: https://pcj.csp.org.ph/index.php/pcj/issue/download/29/PCJ%20V14%20N1%20pp1-14%202019
Link to the dataset/repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1185/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1185.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1185",
"merged_at": "2020-12-07T15:35:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1185.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1185"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/events | https://github.com/huggingface/datasets/pull/1887 | 809,229,809 | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | 1,887 | Implement to_csv for Dataset | [] | closed | false | null | 5 | 2021-02-16T11:27:29Z | 2021-02-19T09:41:59Z | 2021-02-19T09:41:59Z | null | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object
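A minimal usage sketch (the file name is a placeholder):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# with a file path
dataset.to_csv("my_dataset.csv")

# or with a binary file object
with open("my_dataset.csv", "wb") as f:
    dataset.to_csv(f)
```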
The writing is batched to avoid loading the whole table in memory | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"merged_at": "2021-02-19T09:41:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1887"
} | true | [
"@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy",
"Good catch ! I must be able to fix that one by allowing copies for this kind of arrays.\r\nThis is the kind of surprise you get sometimes when playing with arrow x)",
"Raising this error for booleans was introduced in https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 without much explanations unfortunately.\r\nSo \"no copy\" only works for primitive types - except booleans.\r\nThis is confirmed in the source code at https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/array.pxi#L621\r\n\r\nI'm opening a PR to allow copies for booleans...",
"I just merged the fix for boolean ArrayXD, feel free to merge from master to see if it fixes the ci :)",
"@lhoestq unfirtunately, arrays of strings (or any other non-primitive type) require a copy too\r\n\r\nA list of primitive types can be found here: https://github.com/wesm/arrow/blob/c07b9b48cf3e0bbbab493992a492ae47e5b04cad/python/pyarrow/types.pxi#L821\r\n\r\npyarrow provides a `is_primitive` function to check whether a type is primitive , I used it to set `zero_copy_only`\r\n\r\nAlso, `PandasArrayExtensionArray.isna` was using `numpy.isnan` which fails for arrays of strings. I replaced it with `pandas.isna`. Let me know what you think! :) "
] |