url (string, len 58–61) | repository_url (string, 1 class) | labels_url (string, len 72–75) | comments_url (string, len 67–70) | events_url (string, len 65–68) | html_url (string, len 46–51) | id (int64, 599M–1.83B) | node_id (string, len 18–32) | number (int64, 1–6.09k) | title (string, len 1–290) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0–54) | created_at (string, len 20) | updated_at (string, len 20) | closed_at (string, len 20, nullable) | active_lock_reason (null) | body (string, len 0–228k, nullable) | reactions (dict) | timeline_url (string, len 67–70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3142/comments | https://api.github.com/repos/huggingface/datasets/issues/3142/events | https://github.com/huggingface/datasets/issues/3142 | 1,033,566,034 | I_kwDODunzps49mvdS | 3,142 | Provide a way to write a streamed dataset to the disk | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | open | false | null | 1 | 2021-10-22T13:09:53Z | 2021-10-29T11:14:39Z | null | null | **Is your feature request related to a problem? Please describe.**
The streaming mode allows you to get the first 100 rows of a dataset very quickly. But it does not cache the answer, so a later call to get the same 100 rows will send a request to the server again and again.
**Describe the solution you'd like**
Provide a way to write the streamed rows of a dataset to disk, and to load them from there later.
**Describe alternatives you've considered**
Provide a third mode: `lazy`, which would use the local cache for the data that have already been fetched previously, and use streaming to get the rest of the requested data.
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3142/timeline | null | null | null | null | false | [
"Yes, I agree this feature is much needed. We could do something similar to what TF does (https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cache). \r\n\r\nIdeally, if the entire streamed dataset is consumed/cached, the generated cache should be reusable for the Arrow dataset."
] |
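A minimal sketch of the workaround for the request above: stream the first rows, materialize them as a regular Arrow-backed dataset, and save that to disk so later runs read a local copy instead of hitting the server. It assumes a recent `datasets` release (where `streaming=True`, `Dataset.from_list`, `save_to_disk` and `load_from_disk` are available); the dataset name, row count and output path are only illustrative.
```python
from itertools import islice

from datasets import Dataset, load_dataset, load_from_disk

n_rows = 100  # illustrative
streamed = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

rows = list(islice(streamed, n_rows))         # buffer the first n_rows examples from the stream
cached = Dataset.from_list(rows)              # build an Arrow-backed Dataset from the buffered rows

cached.save_to_disk("oscar_first_100")        # persist once
reloaded = load_from_disk("oscar_first_100")  # later calls read from disk, not from the server
```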
https://api.github.com/repos/huggingface/datasets/issues/5558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5558/comments | https://api.github.com/repos/huggingface/datasets/issues/5558/events | https://github.com/huggingface/datasets/pull/5558 | 1,593,655,815 | PR_kwDODunzps5KcF5E | 5,558 | Remove instructions for `ffmpeg` system package installation on Colab | [] | closed | false | null | 2 | 2023-02-21T15:13:36Z | 2023-03-01T13:46:04Z | 2023-02-23T13:50:27Z | null | Colab now has Ubuntu 20.04, which already ships the required version of `ffmpeg` (>4). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5558/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5558",
"merged_at": "2023-02-23T13:50:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5558"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.014525 / 0.011353 (0.003172) | 0.006871 / 0.011008 (-0.004137) | 0.135577 / 0.038508 (0.097069) | 0.039620 / 0.023109 (0.016511) | 0.499829 / 0.275898 (0.223931) | 0.571000 / 0.323480 (0.247520) | 0.009726 / 0.007986 (0.001740) | 0.005654 / 0.004328 (0.001325) | 0.104732 / 0.004250 (0.100482) | 0.046849 / 0.037052 (0.009796) | 0.486667 / 0.258489 (0.228178) | 0.543611 / 0.293841 (0.249770) | 0.056414 / 0.128546 (-0.072133) | 0.019974 / 0.075646 (-0.055672) | 0.484878 / 0.419271 (0.065606) | 0.059244 / 0.043533 (0.015711) | 0.490046 / 0.255139 (0.234907) | 0.517427 / 0.283200 (0.234227) | 0.114692 / 0.141683 (-0.026991) | 1.935935 / 1.452155 (0.483780) | 1.990253 / 1.492716 (0.497537) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.271008 / 0.018006 (0.253002) | 0.610964 / 0.000490 (0.610474) | 0.013423 / 0.000200 (0.013223) | 0.000523 / 0.000054 (0.000468) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031940 / 0.037411 (-0.005472) | 0.130755 / 0.014526 (0.116229) | 0.146616 / 0.176557 (-0.029941) | 0.239386 / 0.737135 (-0.497749) | 0.146612 / 0.296338 (-0.149726) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.675383 / 0.215209 (0.460174) | 6.656828 / 2.077655 (4.579174) | 2.741231 / 1.504120 (1.237111) | 2.232921 / 1.541195 (0.691726) | 2.172116 / 1.468490 
(0.703626) | 1.221623 / 4.584777 (-3.363154) | 5.683653 / 3.745712 (1.937941) | 5.344137 / 5.269862 (0.074275) | 2.969670 / 4.565676 (-1.596006) | 0.142107 / 0.424275 (-0.282168) | 0.015808 / 0.007607 (0.008201) | 0.767366 / 0.226044 (0.541321) | 8.059605 / 2.268929 (5.790676) | 3.333535 / 55.444624 (-52.111089) | 2.669619 / 6.876477 (-4.206857) | 2.652989 / 2.142072 (0.510917) | 1.526397 / 4.805227 (-3.278830) | 0.265609 / 6.500664 (-6.235055) | 0.082759 / 0.075469 (0.007290) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.631086 / 1.841788 (-0.210701) | 18.701351 / 8.074308 (10.627043) | 22.843802 / 10.191392 (12.652410) | 0.240134 / 0.680424 (-0.440290) | 0.046683 / 0.534201 (-0.487518) | 0.576488 / 0.579283 (-0.002795) | 0.650123 / 0.434364 (0.215759) | 0.661190 / 0.540337 (0.120853) | 0.759563 / 1.386936 (-0.627373) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009883 / 0.011353 (-0.001470) | 0.006692 / 0.011008 (-0.004316) | 0.098550 / 0.038508 (0.060042) | 0.035188 / 0.023109 (0.012078) | 0.463535 / 0.275898 (0.187637) | 0.472762 / 0.323480 (0.149282) | 0.007199 / 0.007986 (-0.000787) | 0.007961 / 0.004328 (0.003632) | 0.093140 / 0.004250 (0.088890) | 0.051752 / 0.037052 (0.014700) | 0.453412 / 0.258489 (0.194922) | 0.502741 / 0.293841 (0.208900) | 0.056006 / 0.128546 (-0.072540) | 0.020164 / 0.075646 (-0.055482) | 0.116828 / 0.419271 (-0.302444) | 0.067205 / 0.043533 (0.023672) | 0.442715 / 0.255139 (0.187576) | 0.472525 / 0.283200 (0.189326) | 0.122767 / 0.141683 (-0.018915) | 1.881366 / 1.452155 (0.429212) | 1.978786 / 1.492716 (0.486069) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284180 / 0.018006 (0.266174) | 0.601556 / 0.000490 (0.601067) | 0.008455 / 0.000200 (0.008255) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033515 / 0.037411 (-0.003896) | 0.136407 / 0.014526 (0.121881) | 0.143341 / 0.176557 (-0.033215) | 0.225394 / 0.737135 (-0.511741) | 0.153343 / 0.296338 (-0.142995) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.688202 / 0.215209 (0.472993) | 6.576502 / 2.077655 (4.498847) | 2.839175 / 1.504120 (1.335055) | 2.481152 / 1.541195 (0.939957) | 2.617227 / 1.468490 (1.148736) | 1.314854 / 4.584777 (-3.269922) | 5.805950 / 3.745712 (2.060238) | 3.188930 / 5.269862 (-2.080932) | 2.141719 / 4.565676 (-2.423957) | 0.145069 / 0.424275 (-0.279206) | 0.014567 / 0.007607 (0.006960) | 0.780000 / 0.226044 (0.553955) | 7.898016 / 2.268929 (5.629088) | 3.549060 / 55.444624 (-51.895564) | 2.856569 / 6.876477 (-4.019907) | 3.117719 / 2.142072 (0.975647) | 1.512560 / 4.805227 (-3.292668) | 0.262689 / 6.500664 (-6.237975) | 0.085979 / 0.075469 (0.010509) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.623550 / 1.841788 (-0.218238) | 19.597063 / 8.074308 (11.522755) | 21.293369 / 10.191392 (11.101977) | 0.263780 / 0.680424 (-0.416643) | 0.027289 / 0.534201 (-0.506912) | 0.560361 / 0.579283 (-0.018922) | 0.646288 / 0.434364 (0.211924) | 0.712699 / 0.540337 (0.172361) | 0.818332 / 1.386936 (-0.568604) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/218/comments | https://api.github.com/repos/huggingface/datasets/issues/218/events | https://github.com/huggingface/datasets/pull/218 | 627,173,407 | MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz | 218 | Add Natual Questions and C4 scripts | [] | closed | false | null | 0 | 2020-05-29T10:40:30Z | 2020-05-29T12:31:01Z | 2020-05-29T12:31:00Z | null | Scripts are ready !
However they are not processed nor directly available from gcp yet. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/218/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/218",
"merged_at": "2020-05-29T12:31:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/218"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1623/comments | https://api.github.com/repos/huggingface/datasets/issues/1623/events | https://github.com/huggingface/datasets/pull/1623 | 772,950,710 | MDExOlB1bGxSZXF1ZXN0NTQ0MTI2ODQ4 | 1,623 | Add CLIMATE-FEVER dataset | [] | closed | false | null | 1 | 2020-12-22T13:34:05Z | 2020-12-22T17:53:53Z | 2020-12-22T17:53:53Z | null | As suggested by @SBrandeis , fresh PR that adds CLIMATE-FEVER. Replaces PR #1579.
---
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate-change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute or do not give enough information to validate the claim totalling in 7,675 claim-evidence pairs. The dataset features challenging claims that relate multiple facets and disputed cases of claims where both supporting and refuting evidence are present.
More information can be found at:
* Homepage: http://climatefever.ai
* Paper: https://arxiv.org/abs/2012.00614 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1623/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1623.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1623",
"merged_at": "2020-12-22T17:53:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1623.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1623"
} | true | [
"Thank you @lhoestq for your comments! 😄 I added your suggested changes, ran the tests and regenerated `dataset_infos.json` and `dummy_data`."
] |
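A hedged usage sketch for the dataset added above, for readers who want to inspect it after the merge. The split and field names (`claim`, `evidences`, `evidence_label`, `evidence`) are assumptions based on the public dataset card rather than on this PR's diff.
```python
from datasets import load_dataset

# CLIMATE-FEVER is published as a single split (assumed here to be named "test").
climate_fever = load_dataset("climate_fever", split="test")

example = climate_fever[0]
print(example["claim"])                  # a real-world climate claim
for evidence in example["evidences"]:    # five annotated evidence sentences per claim
    print(evidence["evidence_label"], "-", evidence["evidence"][:80])
```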
https://api.github.com/repos/huggingface/datasets/issues/6042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6042/comments | https://api.github.com/repos/huggingface/datasets/issues/6042/events | https://github.com/huggingface/datasets/pull/6042 | 1,807,516,762 | PR_kwDODunzps5VqEyb | 6,042 | Fix unused DatasetInfosDict code in push_to_hub | [] | closed | false | null | 3 | 2023-07-17T11:03:09Z | 2023-07-18T16:17:52Z | 2023-07-18T16:08:42Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6042/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6042/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6042",
"merged_at": "2023-07-18T16:08:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6042"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008634 / 0.011353 (-0.002719) | 0.005147 / 0.011008 (-0.005861) | 0.102865 / 0.038508 (0.064357) | 0.080245 / 0.023109 (0.057136) | 0.401288 / 0.275898 (0.125390) | 0.419708 / 0.323480 (0.096228) | 0.006342 / 0.007986 (-0.001644) | 0.003998 / 0.004328 (-0.000330) | 0.078880 / 0.004250 (0.074630) | 0.068199 / 0.037052 (0.031147) | 0.389573 / 0.258489 (0.131084) | 0.417292 / 0.293841 (0.123451) | 0.048856 / 0.128546 (-0.079691) | 0.014165 / 0.075646 (-0.061481) | 0.348063 / 0.419271 (-0.071209) | 0.067547 / 0.043533 (0.024014) | 0.402251 / 0.255139 (0.147112) | 0.419478 / 0.283200 (0.136278) | 0.034846 / 0.141683 (-0.106837) | 1.773493 / 1.452155 (0.321338) | 1.930546 / 1.492716 (0.437830) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.211835 / 0.018006 (0.193829) | 0.545311 / 0.000490 (0.544821) | 0.006766 / 0.000200 (0.006566) | 0.000104 / 0.000054 (0.000050) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035406 / 0.037411 (-0.002006) | 0.100769 / 0.014526 (0.086243) | 0.108667 / 0.176557 (-0.067890) | 0.193099 / 0.737135 (-0.544036) | 0.113539 / 0.296338 (-0.182799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.586935 / 0.215209 (0.371726) | 5.895245 / 2.077655 (3.817591) | 2.528375 / 1.504120 (1.024255) | 2.228617 / 1.541195 (0.687423) | 2.295799 / 1.468490 
(0.827309) | 0.859272 / 4.584777 (-3.725505) | 5.033434 / 3.745712 (1.287722) | 7.546587 / 5.269862 (2.276726) | 4.457137 / 4.565676 (-0.108539) | 0.099626 / 0.424275 (-0.324649) | 0.009296 / 0.007607 (0.001689) | 0.713498 / 0.226044 (0.487454) | 7.409385 / 2.268929 (5.140456) | 3.361418 / 55.444624 (-52.083206) | 2.681111 / 6.876477 (-4.195366) | 2.849598 / 2.142072 (0.707526) | 1.114863 / 4.805227 (-3.690364) | 0.215494 / 6.500664 (-6.285170) | 0.075807 / 0.075469 (0.000338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.606458 / 1.841788 (-0.235330) | 23.751096 / 8.074308 (15.676788) | 21.279110 / 10.191392 (11.087718) | 0.220785 / 0.680424 (-0.459639) | 0.032688 / 0.534201 (-0.501513) | 0.530948 / 0.579283 (-0.048335) | 0.630056 / 0.434364 (0.195693) | 0.572743 / 0.540337 (0.032405) | 0.771853 / 1.386936 (-0.615083) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008693 / 0.011353 (-0.002660) | 0.004750 / 0.011008 (-0.006259) | 0.079764 / 0.038508 (0.041256) | 0.082096 / 0.023109 (0.058987) | 0.467198 / 0.275898 (0.191300) | 0.532361 / 0.323480 (0.208881) | 0.005836 / 0.007986 (-0.002149) | 0.004333 / 0.004328 (0.000005) | 0.080444 / 0.004250 (0.076194) | 0.065883 / 0.037052 (0.028831) | 0.464871 / 0.258489 (0.206382) | 0.575026 / 0.293841 (0.281185) | 0.057807 / 0.128546 (-0.070739) | 0.017462 / 0.075646 (-0.058185) | 0.093667 / 0.419271 (-0.325605) | 0.071466 / 0.043533 (0.027933) | 0.495846 / 0.255139 (0.240707) | 0.526100 / 0.283200 (0.242900) | 0.034852 / 0.141683 (-0.106831) | 1.884152 / 1.452155 (0.431998) | 1.922681 / 1.492716 (0.429965) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.250969 / 0.018006 (0.232963) | 0.504979 / 0.000490 (0.504489) | 0.000466 / 0.000200 (0.000266) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032411 / 0.037411 (-0.005000) | 0.093184 / 0.014526 (0.078658) | 0.110798 / 0.176557 (-0.065759) | 0.165741 / 0.737135 (-0.571394) | 0.111022 / 0.296338 (-0.185317) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.661284 / 0.215209 (0.446075) | 6.622388 / 2.077655 (4.544733) | 3.095705 / 1.504120 (1.591585) | 2.745698 / 1.541195 (1.204503) | 2.694103 / 1.468490 (1.225612) | 0.862154 / 4.584777 (-3.722623) | 5.109985 / 3.745712 (1.364273) | 5.040362 / 5.269862 (-0.229499) | 3.072837 / 4.565676 (-1.492840) | 0.110421 / 0.424275 (-0.313854) | 0.008476 / 0.007607 (0.000869) | 0.910020 / 0.226044 (0.683975) | 8.123626 / 2.268929 (5.854698) | 3.813811 / 55.444624 (-51.630813) | 3.017244 / 6.876477 (-3.859232) | 3.061222 / 2.142072 (0.919150) | 1.073548 / 4.805227 (-3.731680) | 0.216327 / 6.500664 (-6.284338) | 0.072977 / 0.075469 (-0.002492) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.722482 / 1.841788 (-0.119305) | 23.706716 / 8.074308 (15.632407) | 23.192134 / 10.191392 (13.000742) | 0.276733 / 0.680424 (-0.403691) | 0.033538 / 0.534201 (-0.500663) | 0.602083 / 0.579283 (0.022799) | 0.578718 / 0.434364 (0.144354) | 0.558311 / 0.540337 (0.017974) | 0.740341 / 1.386936 (-0.646595) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006862 / 0.011353 (-0.004491) | 0.004223 / 0.011008 (-0.006786) | 0.085931 / 0.038508 (0.047423) | 0.081437 / 0.023109 (0.058328) | 0.349542 / 0.275898 (0.073644) | 0.379881 / 0.323480 (0.056401) | 0.005651 / 0.007986 (-0.002334) | 0.003662 / 0.004328 (-0.000666) | 0.065251 / 0.004250 (0.061001) | 0.061599 / 0.037052 (0.024547) | 0.359681 / 0.258489 (0.101192) | 0.392502 / 0.293841 (0.098661) | 0.031300 / 0.128546 (-0.097246) | 0.008591 / 0.075646 (-0.067055) | 0.288577 / 0.419271 (-0.130694) | 0.062920 / 0.043533 (0.019388) | 0.348989 / 0.255139 (0.093850) | 0.362769 / 0.283200 (0.079569) | 0.030087 / 0.141683 (-0.111596) | 1.480748 / 1.452155 (0.028594) | 1.580413 / 1.492716 (0.087697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205804 / 0.018006 (0.187798) | 0.455386 / 0.000490 (0.454897) | 0.003134 / 0.000200 (0.002934) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030252 / 0.037411 (-0.007159) | 0.087566 / 0.014526 (0.073041) | 0.098209 / 0.176557 (-0.078347) | 0.155816 / 0.737135 (-0.581319) | 0.098938 / 0.296338 (-0.197401) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386688 / 0.215209 (0.171479) | 3.852777 / 2.077655 (1.775123) | 1.938688 / 1.504120 (0.434568) | 1.779234 / 1.541195 (0.238039) | 1.864262 / 1.468490 
(0.395772) | 0.482472 / 4.584777 (-4.102305) | 3.658060 / 3.745712 (-0.087652) | 5.206489 / 5.269862 (-0.063373) | 3.262498 / 4.565676 (-1.303179) | 0.057523 / 0.424275 (-0.366752) | 0.007365 / 0.007607 (-0.000242) | 0.466886 / 0.226044 (0.240841) | 4.671026 / 2.268929 (2.402097) | 2.380357 / 55.444624 (-53.064268) | 2.096590 / 6.876477 (-4.779887) | 2.274415 / 2.142072 (0.132342) | 0.579705 / 4.805227 (-4.225522) | 0.134522 / 6.500664 (-6.366142) | 0.062232 / 0.075469 (-0.013237) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245965 / 1.841788 (-0.595823) | 20.115180 / 8.074308 (12.040872) | 14.602983 / 10.191392 (4.411591) | 0.146890 / 0.680424 (-0.533533) | 0.018424 / 0.534201 (-0.515777) | 0.393941 / 0.579283 (-0.185342) | 0.413785 / 0.434364 (-0.020579) | 0.453344 / 0.540337 (-0.086993) | 0.655446 / 1.386936 (-0.731490) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006807 / 0.011353 (-0.004546) | 0.004083 / 0.011008 (-0.006925) | 0.065389 / 0.038508 (0.026881) | 0.081056 / 0.023109 (0.057947) | 0.362823 / 0.275898 (0.086925) | 0.401928 / 0.323480 (0.078448) | 0.005452 / 0.007986 (-0.002533) | 0.003413 / 0.004328 (-0.000915) | 0.065238 / 0.004250 (0.060987) | 0.057264 / 0.037052 (0.020211) | 0.375713 / 0.258489 (0.117224) | 0.407858 / 0.293841 (0.114017) | 0.031580 / 0.128546 (-0.096966) | 0.008643 / 0.075646 (-0.067003) | 0.071693 / 0.419271 (-0.347578) | 0.049392 / 0.043533 (0.005859) | 0.370194 / 0.255139 (0.115055) | 0.384647 / 0.283200 (0.101447) | 0.024805 / 0.141683 (-0.116877) | 1.509511 / 1.452155 (0.057356) | 1.560193 / 1.492716 (0.067477) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.234442 / 0.018006 (0.216436) | 0.458818 / 0.000490 (0.458329) | 0.000407 / 0.000200 (0.000207) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031661 / 0.037411 (-0.005750) | 0.093143 / 0.014526 (0.078618) | 0.102205 / 0.176557 (-0.074352) | 0.155850 / 0.737135 (-0.581286) | 0.104345 / 0.296338 (-0.191994) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419641 / 0.215209 (0.204432) | 4.200808 / 2.077655 (2.123153) | 2.218227 / 1.504120 (0.714107) | 2.052604 / 1.541195 (0.511409) | 2.150611 / 1.468490 (0.682121) | 0.482665 / 4.584777 (-4.102112) | 3.606541 / 3.745712 (-0.139172) | 3.310637 / 5.269862 (-1.959224) | 2.070200 / 4.565676 (-2.495476) | 0.056586 / 0.424275 (-0.367689) | 0.007826 / 0.007607 (0.000218) | 0.491037 / 0.226044 (0.264992) | 4.901538 / 2.268929 (2.632610) | 2.676402 / 55.444624 (-52.768223) | 2.363935 / 6.876477 (-4.512542) | 2.587813 / 2.142072 (0.445741) | 0.579302 / 4.805227 (-4.225926) | 0.132792 / 6.500664 (-6.367873) | 0.061865 / 0.075469 (-0.013604) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354315 / 1.841788 (-0.487473) | 20.874516 / 8.074308 (12.800208) | 14.863559 / 10.191392 (4.672167) | 0.183635 / 0.680424 (-0.496789) | 0.018636 / 0.534201 (-0.515565) | 0.395317 / 0.579283 (-0.183966) | 0.410598 / 0.434364 (-0.023766) | 0.476485 / 0.540337 (-0.063853) | 0.643246 / 1.386936 (-0.743690) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2763/comments | https://api.github.com/repos/huggingface/datasets/issues/2763/events | https://github.com/huggingface/datasets/issues/2763 | 961,895,523 | MDU6SXNzdWU5NjE4OTU1MjM= | 2,763 | English wikipedia datasets is not clean | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-08-05T14:37:24Z | 2023-07-25T17:43:04Z | 2023-07-25T17:43:04Z | null | ## Describe the bug
Wikipedia English dumps contain many paragraphs, such as "References", "Category:" and "See Also" sections, that should not be used for training.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset
w = load_dataset('wikipedia', '20200501.en')
print(w['train'][0]['text'])
```
> 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'**
## Expected results
I expect no junk in the data.
## Actual results
Specify the actual results or traceback.
## Environment info
- `datasets` version: 1.10.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2763/timeline | null | completed | null | null | false | [
"Hi ! Certain users might need these data (for training or simply to explore/index the dataset).\r\n\r\nFeel free to implement a map function that gets rid of these paragraphs and process the wikipedia dataset with it before training"
] |
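The maintainer's comment above suggests removing these paragraphs with a `map` function before training. The snippet below is one hedged way to do that; the list of trailing section headings and the regular expression are rough heuristics, not an official cleanup recipe.
```python
import re

from datasets import load_dataset

TRAILING_HEADINGS = ("See also", "References", "External links", "Category:")

def strip_trailing_sections(example):
    text = example["text"]
    cut = len(text)
    for heading in TRAILING_HEADINGS:
        idx = text.find("\n" + heading)
        if idx != -1:
            cut = min(cut, idx)  # keep only the text before the first trailing section
    # Drop any stray "Category:" lines that remain.
    text = re.sub(r"^Category:.*$", "", text[:cut], flags=re.MULTILINE)
    example["text"] = text.strip()
    return example

wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki_clean = wiki.map(strip_trailing_sections)
```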
https://api.github.com/repos/huggingface/datasets/issues/2844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2844/comments | https://api.github.com/repos/huggingface/datasets/issues/2844/events | https://github.com/huggingface/datasets/pull/2844 | 981,382,806 | MDExOlB1bGxSZXF1ZXN0NzIxNDQzMjY2 | 2,844 | Fix: wikicorpus - fix keys | [] | closed | false | null | 1 | 2021-08-27T15:56:06Z | 2021-09-06T14:07:28Z | 2021-09-06T14:07:27Z | null | As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`.
I fixed that by taking the file index into account in the keys. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2844/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2844",
"merged_at": "2021-09-06T14:07:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2844"
} | true | [
"The CI error is unrelated to this PR\r\n\r\n... merging !"
] |
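For context on the fix described above: loading scripts yield `(key, example)` pairs, and keys must be unique across the whole split. The generator below is a generic, hypothetical stand-in (not the actual `wikicorpus` script) showing how combining the file index with the per-file example index keeps keys unique.
```python
def _generate_examples(filepaths):
    for file_idx, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for row_idx, line in enumerate(f):
                # row_idx restarts at 0 for every file, but the (file_idx, row_idx)
                # combination stays unique, which avoids the duplicate-keys error.
                yield f"{file_idx}_{row_idx}", {"text": line.rstrip("\n")}
```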
https://api.github.com/repos/huggingface/datasets/issues/4929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4929/comments | https://api.github.com/repos/huggingface/datasets/issues/4929/events | https://github.com/huggingface/datasets/pull/4929 | 1,361,508,366 | PR_kwDODunzps4-WK2w | 4,929 | Fixes a typo in loading documentation | [] | closed | false | null | 0 | 2022-09-05T07:18:54Z | 2022-09-06T02:11:03Z | 2022-09-05T13:06:38Z | null | As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` should be `"train"`.
(screenshot of the documentation page omitted)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4929/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"merged_at": "2022-09-05T13:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | [] | closed | false | null | 1 | 2021-02-18T16:30:46Z | 2021-02-18T17:10:03Z | 2021-02-18T17:10:01Z | null | As noticed in #1887, the conversion of a dataset with a boolean ArrayXD feature type fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
zero copy is available for all primitive types except booleans
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy
and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"merged_at": "2021-02-18T17:10:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904"
} | true | [
"Thanks!"
] |
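The PyArrow behavior behind the fix above is easy to reproduce on its own: numeric arrays convert to NumPy without a copy, while boolean arrays are bit-packed and need `zero_copy_only=False`. This is a standalone illustration, not code taken from the PR.
```python
import pyarrow as pa

ints = pa.array([1, 2, 3])
print(ints.to_numpy())                         # zero-copy conversion works for primitive numeric types

bools = pa.array([True, False, True])
try:
    bools.to_numpy()                           # default zero_copy_only=True
except pa.ArrowInvalid as err:
    print("zero-copy conversion failed:", err)

print(bools.to_numpy(zero_copy_only=False))    # an explicit copy succeeds
```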
https://api.github.com/repos/huggingface/datasets/issues/4303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4303/comments | https://api.github.com/repos/huggingface/datasets/issues/4303/events | https://github.com/huggingface/datasets/pull/4303 | 1,230,867,728 | PR_kwDODunzps43j8cH | 4,303 | Fix: Add missing comma | [] | closed | false | null | 1 | 2022-05-10T09:21:38Z | 2022-05-11T08:50:15Z | 2022-05-11T08:50:14Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4303/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4303.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4303",
"merged_at": "2022-05-11T08:50:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4303.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4303"
} | true | [
"The CI failure is unrelated to this PR and fixed on master, merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1442/comments | https://api.github.com/repos/huggingface/datasets/issues/1442/events | https://github.com/huggingface/datasets/pull/1442 | 761,026,069 | MDExOlB1bGxSZXF1ZXN0NTM1NzU2Nzgx | 1,442 | Create XML dummy data without loading all dataset in memory | [] | closed | false | null | 0 | 2020-12-10T08:32:07Z | 2020-12-17T09:59:43Z | 2020-12-17T09:59:43Z | null | While I was adding one XML dataset, I noticed that all the dataset was loaded in memory during the dummy data generation process (using nearly all my laptop RAM).
Looking at the code, I found that the cause is the use of `ET.parse()`. This method loads **all the file content in memory**.
To fix this, I refactored the code to use `ET.iterparse()` instead, which **parses the file content incrementally**.
I have also implemented a test. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1442/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1442.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1442",
"merged_at": "2020-12-17T09:59:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1442.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1442"
} | true | [] |
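The difference between `ET.parse()` and `ET.iterparse()` mentioned above can be illustrated with a small helper that reads elements incrementally and clears them as it goes, keeping memory flat even for very large XML files. The tag name and limit are illustrative; this is not the code from the PR.
```python
import xml.etree.ElementTree as ET

def first_texts(xml_path, tag="doc", limit=100):
    """Collect the text of the first `limit` <tag> elements without building the full tree."""
    texts = []
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == tag:
            texts.append(elem.text or "")
            elem.clear()                 # free the element's children immediately
            if len(texts) >= limit:
                break
    return texts
```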
https://api.github.com/repos/huggingface/datasets/issues/2092 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2092/comments | https://api.github.com/repos/huggingface/datasets/issues/2092/events | https://github.com/huggingface/datasets/issues/2092 | 836,984,043 | MDU6SXNzdWU4MzY5ODQwNDM= | 2,092 | How to disable making arrow tables in load_dataset ? | [] | closed | false | null | 7 | 2021-03-21T04:50:07Z | 2022-06-01T16:49:52Z | 2022-06-01T16:49:52Z | null | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2092/timeline | null | completed | null | null | false | [
"Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !",
"People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n",
"@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?",
"Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.",
"@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?",
"We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub",
"Hi! You can now use `Dataset.push_to_hub` to store preprocessed files on the Hub.\r\n\r\nAnd to avoid downloading preprocessed files, you can use streaming by setting `streaming=True` in `load_dataset`."
] |
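As the last comment above points out, streaming now covers this use case: with `streaming=True` no Arrow table is built locally and examples are yielded lazily, while `push_to_hub` lets preprocessed files be shared. A brief, hedged sketch (dataset and repository names are only examples):
```python
from datasets import load_dataset

# No local Arrow conversion: examples are produced on the fly.
stream = load_dataset("wikipedia", "20220301.en", split="train", streaming=True)
for i, example in enumerate(stream):
    print(example["title"])
    if i == 2:
        break

# To avoid re-preprocessing elsewhere, a processed Dataset can be shared as-is:
# processed.push_to_hub("username/my-processed-dataset")  # hypothetical repository name
```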
https://api.github.com/repos/huggingface/datasets/issues/793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/793/comments | https://api.github.com/repos/huggingface/datasets/issues/793/events | https://github.com/huggingface/datasets/pull/793 | 735,105,907 | MDExOlB1bGxSZXF1ZXN0NTE0NTU2NzY5 | 793 | [Datasets] fix discofuse links | [] | closed | false | null | 0 | 2020-11-03T08:03:45Z | 2020-11-03T08:16:41Z | 2020-11-03T08:16:40Z | null | The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken
I changed the links and created the new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/793/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/793",
"merged_at": "2020-11-03T08:16:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/793"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1248/comments | https://api.github.com/repos/huggingface/datasets/issues/1248/events | https://github.com/huggingface/datasets/pull/1248 | 758,454,438 | MDExOlB1bGxSZXF1ZXN0NTMzNjI0ODY5 | 1,248 | Update step-by-step guide about the dataset cards | [] | closed | false | null | 0 | 2020-12-07T12:12:12Z | 2020-12-07T13:19:24Z | 2020-12-07T13:19:23Z | null | Small update to the step-by-step guide about the dataset cards to indicate that the card can be created and completed while exploring the dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1248/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1248/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1248.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1248",
"merged_at": "2020-12-07T13:19:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1248.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1248"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/318 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/318/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/318/comments | https://api.github.com/repos/huggingface/datasets/issues/318/events | https://github.com/huggingface/datasets/pull/318 | 646,682,840 | MDExOlB1bGxSZXF1ZXN0NDQwOTExOTYy | 318 | Multitask | [] | closed | false | null | 18 | 2020-06-27T13:27:29Z | 2022-07-06T15:19:57Z | 2022-07-06T15:19:57Z | null | Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Dataset` methods (cache_files, columns, nbytes, num_columns, num_rows, column_names, schema, shape). Some of the other methods are complicated as they change the number of examples. These raise `NotImplementedError`s at the moment.
This will need some tests which I haven't written yet.
There's definitely room for improvements but I think the general approach is sound. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/318/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/318/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/318.diff",
"html_url": "https://github.com/huggingface/datasets/pull/318",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/318.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/318"
} | true | [
"It's definitely going in the right direction ! Thanks for giving it a try\r\n\r\nI really like the API.\r\nIMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.\r\nAll the formatting methods could easily be added though.\r\n\r\nI think there are some parts that will require some work with apache arrow like slicing. I can find a way to do it using pyarrow tables concatenation (I did something similar when implementing `__getitem__` with an input that is a list of indices [here](https://github.com/huggingface/nlp/pull/322/files#diff-73270df8d7f08c62a27e40806e1a5fb0R463-R469)). It is very fast and it allows to have the same output format as a normal Dataset.\r\n\r\nAlso maybe we should check that not only the columns but also the schemas match ?\r\nAnd maybe add the `seed` of the shuffling step as an argument ?\r\n\r\n",
"Maybe we should remove the methods that are not implemented for now, WDYT @thomwolf ?",
"That's an interesting first draft, thanks a lot for that and the user facing API is really nice.\r\n\r\nI think we should dive more into this and the questions of #217 before merging the first version though.\r\n\r\nIn particular, the typical way to do multi-tasking is usually to sample a task and then sample a batch within the selected task. I think we should probably stay be closer to this traditional approach, or at least make it very easy to do, rather than go to close to the T5 approach which is very specific to this paper.\r\n\r\nIn this regard, it seems important to find some way to address the remarks of @zphang. I'm still wondering if we should not adopt more of a sampling approach rather than an iteration approach.",
"@thomwolf Thanks! I mainly wanted to get something working quickly for my own MTL research. I agree with a lot of the points you made so I'll convert this pull request back to a draft.\r\n\r\nFor your specific point about 'batch-level' multitask mixing, it would be a pretty trivial change to add a `batch_size` parameter and ensure every `batch_size` examples are from the same task. This would certainly work, but would add a notion of 'batches' to a Dataset, which does feel like a 'Sampler-level' concept and not a Dataset one. There's also the possibility of wanting some specific task-level sampling functionality (e.g. applying `SortishSampler` to each task) which would only work with this kind of 2 step sampling approach. My first proposal in the transformers repo was actually a Sampler https://github.com/huggingface/transformers/issues/4340. I wonder whether functionality at the sampler-level has a place in the vision for the `nlp` repo?\r\n\r\nI imagine following a sampling approach you'd have to abandon maintaining the same user-facing API as a standard dataset (A shame because replacing a single dataset seamlessly with a multitask one is a really nice user-experience).\r\n\r\nRandom half-Idea: You could have a class which accepts a list of any iterables (either a Dataset or a DataLoader which already is doing the batching). Not sure what interface you'd present though. hmmm. \r\n\r\nThere's definitely more discussion to have. \r\n",
"Are there any updates on making multi-task learning more officially supported in the datasets/transformers libraries? \r\nGiven that many papers use more than one task, it would be great to have multi-task learning more officially supported and easier to use. There are a few notebooks/blogs about using HF Transformers for this, but they all mention that it's more of a hack and not really officially supported (e.g. [this notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=xW8bnTgCsx5c), or [this blog](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)). \r\n\r\n[jiant](https://github.com/nyu-mll/jiant) was a framework built on transformers that made multi-task learning a first class feature of the library until recently, but they stopped maintaining their library a month ago ([see here](https://github.com/nyu-mll/jiant)). \r\nThis could be a good reason to increase support from the HF team? @lhoestq @thomwolf \r\n\r\nI'm not advanced enough to contribute on this, but an up-to-date notebook showing how to train a model e.g. on both MLM and next-sentence-prediction would already be very useful!",
"> Are there any updates on making multi-task learning more officially supported in the datasets/transformers libraries? Given that many papers use more than one task, it would be great to have multi-task learning more officially supported and easier to use. There are a few notebooks/blogs about using HF Transformers for this, but they all mention that it's more of a hack and not officially supported (e.g. [this notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=xW8bnTgCsx5c), or [this blog](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)).\r\n> \r\n> [jiant](https://github.com/nyu-mll/jiant) was a framework built on transformers that made multi-task learning a first class feature of the library until recently, but they stopped maintaining their library a month ago. This could be a good reason to increase support from the HF team? @lhoestq\r\n> \r\n> I'm not advanced enough to contribute on this, but an up-to-date notebook showing how to train a model e.g. on both MLM and NSP would already be very useful!\r\n\r\nI kinda stopped working on this as I didn't really get any response on an actual workable solution.\r\n\r\nThe problem that I came up against after initially being redirected here after [proposing this in the transformers repo](https://github.com/huggingface/transformers/issues/4340) ([among](https://github.com/huggingface/transformers/issues/6872) [others](https://github.com/huggingface/transformers/issues/1856)) , was the request be able to do the multitask mixing at the batch level as well as at the level of individual examples. As this repo doesn't really have the concept of 'batches' it would need to be implemented in the transformers repo, rather than here. You could then pick which level to do your multitask learning on.\r\n\r\nWork on T5 and as of last week, on [exT5](https://arxiv.org/pdf/2111.10952.pdf), have shown that multitask mixing on the example level works incredibly well (with a big enough batch size), so if you're ok doing that, then this pull request works.\r\n\r\nI completely agree that multitask learning is a vital part of modern NLP, nearly every piece of research code I write has at least some aspect of multitask learning (currently using this patch). Many of the top GLUE and SuperGLUE submissions are using some aspect of mutlitask learning. We need to support it.",
"Fully agree. Batching and data loading is one important thing. The part I'm struggling with right now is the classification head (which is more part of the Transformers repo, but also essential for multi-task learning). @ghomasHudson, how do you tune two classification heads simultaneously? Say, when I want to fine-tune an existing base-model on some classification task (like NLI, or next-sentence-prediction) and at the same time add some MLM for regularisation & domain adaptation. In this case I need two classification heads, but I don't know how to switch them between the batches. ",
"> Fully agree. Batching and data loading is one important thing. The part I'm struggling with right now is the classification head (which is more part of the Transformers repo, but also essential for multi-task learning). @ghomasHudson, how do you tune two classification heads simultaneously? Say, when I want to fine-tune an existing base-model on some classification task (like NLI, or next-sentence-prediction) and at the same time add some MLM for regularisation & domain adaptation. In this case I need two classification heads, but I don't know how to switch them between the batches.\r\n\r\nThis pull request is mainly focused on getting the data in the right format, but you're right that there's no easy way to pick between the heads without something like jiant. You could of course replicate this functionality yourself - probably by making a class that implements the functionality of both `ModelNameForSequenceClassification` or `ModelNameForMaskedLM` picking between them depending on some task parameter you add to the forward pass. \r\n\r\njiant make this approach model agnostic by [ignoring the custom per-model head implementations of huggingface](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/heads.py#L17-L18), instead making generic versions. Then the jiant code [passes a `task` parameter](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/primary.py#L107-L109) into their [JiantModel](https://github.com/nyu-mll/jiant/blob/386d4e726a27becda1b03c241f064eb13c54860f/jiant/proj/main/modeling/primary.py#L36-L79) wrapper. To implement this in huggingface transformers would require quite a few modifications to the current approach (potentially interfering with some other project aims e.g. code readability), so you might find it tricky to get a change like that accepted. It would be super cool though.\r\n\r\nAnd there's of course the exT5 way of doing things too where you sidestep this issue entirely by treating both tasks as text-to-text problems so you can end up with 100% shared parameters, e.g.\r\nMLM: `Lorem <mask_0> amet, consectetur <mask_1> do eiusmod tempor incididunt ut labore <mask_2>`\r\nNLI: `Premise: The Old One always comforted Ca'daan, except today. hypothesis: Ca'daan knew the Old One very well.`\r\nThis also allows you to do mixed batches of both tasks.\r\n\r\nPersonally, my research mainly focuses on this last approach, using the structure of the data itself to indicate the task rather than swapping in and out different parts of the network.",
"Hi! `jiant` maintainer here, don't have much to add to the conversation yet but I'm happy to share my experience/thoughts on working with Multitask models if people have questions.",
"Hi ! I think it could be easier to simply share as examples in `transformers` some code that uses `jiant` and/or subclass/reimplement some part of `transformers` for multitask ?",
"> Hi ! I think it could be easier to simply share as examples in `transformers` some code that uses `jiant` and/or subclass/reimplement some part of `transformers` for multitask ?\r\n\r\nWell since `jiant` requires new huggingface models to be explicitly added (as there are [\"subtle differences in the models that jiant must abstract\"](https://github.com/nyu-mll/jiant/blob/master/guides/models/adding_models.md)), and isn't being maintained anymore, then the first option might be out of date quickly.\r\n\r\nIf `transformers` could move towards making the task-specific heads more generic and as well as [creating a new base model in the `__init__` method](https://github.com/huggingface/transformers/blob/43f953cc2eec804eba04e2a9ae164d1a33fd97a8/src/transformers/models/bert/modeling_bert.py#L1502), allowing it to be passed as an argument (along with other little tweaks to standardize the approach), then this functionality could be moved into `transformers` itself.\r\n\r\nIt does seem a little redundant to have `jiant` as a library abstracting all the idiosyncrasies of each model type, where this could be done directly in the `transformers` repo in a single place alongside the model.\r\n\r\nIt's not an easy problem to solve though, especially balanced with the desire to expose models with minimal abstraction. @zphang probably knows more about this than me though.",
"As mentioned, one of the main obstacles is that HF/T doesn't support generic heads. At first glance, this should be easy, since the interface is quite simple: models output both a token-wise and a sequence representation (e.g. `[CLS]`), and heads use either one and output the corresponding predictions/losses.\r\n\r\nHowever, there are a number of cases where this doesn't work. One of them is multiple-choice tasks like HellaSwag, which is a multiple choice task with 4 text options. The way this is normally formatted is that you encode `context + question + option_X` for X=1..4, and then score all four options based on a scoring head and pick the highest scoring option as the prediction. This requires you to run the encoder on 4 separate inputs, which breaks the above abstraction (the task-specific model might need to call the encoder multiple times).\r\n\r\nAnother thing is batching. You can imagine with the above that you might want a different batch size for multiple-choice tasks compared to simpler classification tasks. This means you need task-specific batching as well. In addition, [it's been shown](https://arxiv.org/abs/2101.11038) that you really want to mix tasks within a single batch. This also leads into issues like how you want to sample different task examples, early stopping on them, how to mix the validation scores, etc. (`jiant` addressed these, through probably more-complicated-than-necessary configurations.)\r\n\r\nNone of these are insurmountable problems, but it requires some tweaking of the current code layout to get it to work. I would guess that it wouldn't take much work to get a 90% implementation.",
"> Another thing is batching. You can imagine with the above that you might want a different batch size for multiple-choice tasks compared to simpler classification tasks. This means you need task-specific batching as well. In addition, [it's been shown](https://arxiv.org/abs/2101.11038) that you really want to mix tasks within a single batch. This also leads into issues like how you want to sample different task examples, early stopping on them, how to mix the validation scores, etc. (`jiant` addressed these, through probably more-complicated-than-necessary configurations.)\r\n\r\nThat's reassuring. exT5 find the same thing - that mixing tasks together in a batch gives better performance (provided the batch size is big enough that each batch contains a mix of different tasks). Assuming this, we can ignore doing things at the batch-level and just do this at the individual example level - in which case this pull request already does the data mixing part of the problem! Balancing different tasks could easily be added here by implementing temperature-scaled mixing, custom weights, etc...\r\n\r\nTo make a generic implementation of this using different heads would be hard (impossible?) without doing the sub-batching that Muppet do - in which case we're back at dealing with the 'batch' (sub-batch) level which would need an implementation in `transformers` not here.\r\n\r\n",
"Mixing at example should work fine. One issue though is that, as mentioned above, different tasks maybe actually require different amounts of memory, so downstream the user would have to find some way to handle that. But this might be one of those \"the last/edge-case 10% is the hardest\" to handle kind of deals.",
"Very true - there's always going to be those cases. I also feel that the way things are going, if we just leave this for a few years no one will be wanting to use task-specific heads anymore - it'll all be task prompts included in the input a-la GPT, T5, etc... which will make this substantially simpler to implement.\r\n\r\nIt's quite tricky to make a suitably non-opinionated generic version of this at the moment.",
"> Is there an advantage to varying the proportions of each task in each batch\r\n\r\nSome tasks have much less data than others. E.g. SNLI vs. CoLA is almost a 100x difference, so people often sample differently-sized tasks differently.",
"As a short-term solution, I like @lhoestq's suggestion to create a notebook that shows how to implement multi-task learning by subclassing some transformer & dataset classes in a general way. I've been trying to get @zphang's [great but old notebook](https://colab.research.google.com/github/zphang/zphang.github.io/blob/master/files/notebooks/Multi_task_Training_with_Transformers_NLP.ipynb#scrollTo=CQ39AbTAPAUi) on multi-task learning running today and I didn't get it to work, probably because it was implemented a long time ago with `transformers==2.11`, `torch==1.2`~ etc and installing older versions still caused errors.\r\nThere is also this [interesting new repo](https://github.com/shahrukhx01/multitask-learning-transformers), which has a cool way of enabling you to save and load a model with two classification heads ([see model here](https://huggingface.co/shahrukhx01/bert-multitask-query-classifiers) and blog post [here](https://medium.com/@shahrukhx01/multi-task-learning-with-transformers-part-1-multi-prediction-heads-b7001cf014bf)). Haven't tried it yet, but it only uses `BertForSequenceClassification` instead of the more general AutoModelForXYZ\r\n\r\n@zphang, would you maybe be up for contributing an updated version of your older notebook with the latest version of `transformers` and `datasets` which runs in today's colabs? I feel like this would be very helpful for the community and if you keep the classes/functions somewhat general, people can easily adapt it to their use cases! 🙏 :) \r\nWould be a great addition to the [HF notebooks](https://huggingface.co/docs/transformers/notebooks).\r\n\r\nIn the medium-term, I agree that it would be great to have more native support for this via the HF libraries. I feels weird that you can neither train the old BERT (trained on two tasks) nor any of the newer models, without some hacks. ",
"@zphang would love to see the newer notebook as suggested by @MoritzLaurer "
] |
https://api.github.com/repos/huggingface/datasets/issues/4286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4286/comments | https://api.github.com/repos/huggingface/datasets/issues/4286/events | https://github.com/huggingface/datasets/pull/4286 | 1,226,758,621 | PR_kwDODunzps43W-DI | 4,286 | Add Lahnda language tag | [] | closed | false | null | 1 | 2022-05-05T14:34:20Z | 2022-05-10T12:10:04Z | 2022-05-10T12:02:38Z | null | This language is present in [Wikimedia's WIT](https://huggingface.co/datasets/wikimedia/wit_base) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4286/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4286",
"merged_at": "2022-05-10T12:02:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4286"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2174/comments | https://api.github.com/repos/huggingface/datasets/issues/2174/events | https://github.com/huggingface/datasets/pull/2174 | 851,383,675 | MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2 | 2,174 | Pin docutils for better doc | [] | closed | false | null | 0 | 2021-04-06T12:40:20Z | 2021-04-06T12:55:53Z | 2021-04-06T12:55:53Z | null | The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx).
You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2174/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"merged_at": "2021-04-06T12:55:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2174"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/600/comments | https://api.github.com/repos/huggingface/datasets/issues/600/events | https://github.com/huggingface/datasets/issues/600 | 697,496,913 | MDU6SXNzdWU2OTc0OTY5MTM= | 600 | Pickling error when loading dataset | [] | closed | false | null | 5 | 2020-09-10T06:28:08Z | 2020-09-25T14:31:54Z | 2020-09-25T14:31:54Z | null | Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True,
truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error:
```
Traceback (most recent call last):
File "src/run_language_modeling.py", line 319, in <module>
main()
File "src/run_language_modeling.py", line 248, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "src/run_language_modeling.py", line 139, in get_dataset
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)
File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map
new_fingerprint=new_fingerprint,
File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/data/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps
dump(obj, file)
File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump
Pickler(file, recurse=True).dump(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump
StockPickler.dump(self, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump
self.save(obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function
obj.__dict__, fkwdefaults), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell
pickler.save_reduce(_create_cell, (f,), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce
save(cls)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type
obj.__bases__, _dict), obj=obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce
save(args)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple
save(element)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems
save(v)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save
self.save_global(obj, rv)
File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/600/timeline | null | completed | null | null | false | [
"When I change from python3.6 to python3.8, it works! ",
"Does it work when you install `nlp` from source on python 3.6?",
"No, still the pickling error.",
"I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also tried nlp 0.4.0)\r\n\r\nIf I try\r\n\r\n```python\r\nfrom datasets import load_dataset # or from nlp\r\nfrom transformers import BertTokenizer\r\n\r\ntokenizer = BertTokenizer.from_pretrained(\"bert-base-uncased\")\r\ndataset = load_dataset(\"text\", data_files=file_path, split=\"train\")\r\ndataset = dataset.map(lambda ex: tokenizer(ex[\"text\"], add_special_tokens=True,\r\n truncation=True, max_length=512), batched=True)\r\ndataset.set_format(type='torch', columns=['input_ids'])\r\n```\r\nIt runs without error",
"Closing since it looks like it's working on >= 3.6.9\r\nFeel free to re-open if you have other questions :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/601/comments | https://api.github.com/repos/huggingface/datasets/issues/601/events | https://github.com/huggingface/datasets/pull/601 | 697,574,848 | MDExOlB1bGxSZXF1ZXN0NDgzNTAzMjAw | 601 | check if trasnformers has PreTrainedTokenizerBase | [] | closed | false | null | 0 | 2020-09-10T07:54:56Z | 2020-09-10T11:01:37Z | 2020-09-10T11:01:36Z | null | Fix #598 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/601/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/601",
"merged_at": "2020-09-10T11:01:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/601"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4392/comments | https://api.github.com/repos/huggingface/datasets/issues/4392/events | https://github.com/huggingface/datasets/pull/4392 | 1,244,859,971 | PR_kwDODunzps44RtsX | 4,392 | remove int documentation from logging docs | [] | closed | false | null | 1 | 2022-05-23T09:24:55Z | 2022-05-23T15:16:55Z | 2022-05-23T15:08:32Z | null | Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4392/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4392.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4392",
"merged_at": "2022-05-23T15:08:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4392.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4392"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3669/comments | https://api.github.com/repos/huggingface/datasets/issues/3669/events | https://github.com/huggingface/datasets/pull/3669 | 1,122,335,622 | PR_kwDODunzps4x_OTI | 3,669 | Common voice validated partition | [] | closed | false | null | 7 | 2022-02-02T20:04:43Z | 2022-02-08T17:26:52Z | 2022-02-08T17:23:12Z | null | This patch adds access to the 'validated' partitions of CommonVoice datasets (provided by the dataset creators but not available in the HuggingFace interface yet).
As 'validated' contains significantly more data than 'train' (although it contains both test and validation, so one needs to be careful there), it can be useful to train better models where no strict comparison with the previous work is intended. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3669/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3669/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3669.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3669",
"merged_at": "2022-02-08T17:23:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3669.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3669"
} | true | [
"Hi @patrickvonplaten - could you please advise whether this would be a welcomed change, and if so, who I consult regarding the unit-tests?",
"I'd be happy with adding this change. @anton-l @lhoestq - what do you think?",
"Cool ! I just fixed the tests by adding a dummy `validated.tsv` file in the dummy data archive of common_voice\r\n\r\nI wonder if you should separate the train/valid/test configuration from the validated/invalidated configuration of the splits ? \r\nIn particular having `validated` along with the train/valid/test splits could be a bit weird since it comprises them. We can do that if you think it makes more sense. Otherwise it's also good as it is right now :)\r\n",
"Thanks! I think that there are 2 cases for using the validated partition: 1) trainset = {validated - dev - test}, dev and test as they come; 2) train, dev, and test sampled from validated manually with the desired ratios.\r\nIn either case, I think that it's quite a big change on the HF interface part, so could as well be taken care of in the client code. Or is it not? (In which case, what's the most compact way to implement this?)",
"What's important IMO is to let the users as much flexibility as they need - so we try to not do too much regarding splits to not constrain users. So I guess the way it is right now is ok. Can you confirm that it's ok @patrickvonplaten and that it won't break some speech training script out there ?",
"@lhoestq all split names are explicit in our example scripts, so this shouldn't break anything, feel free to merge :)\r\nI'll go ahead and add this to the official `mozilla-foundation` datasets as well ",
"Good for me! This has no real down-sides IMO and surely won't break any training scripts."
] |
https://api.github.com/repos/huggingface/datasets/issues/1256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1256/comments | https://api.github.com/repos/huggingface/datasets/issues/1256/events | https://github.com/huggingface/datasets/pull/1256 | 758,531,980 | MDExOlB1bGxSZXF1ZXN0NTMzNjkwMTQ2 | 1,256 | adding LiMiT dataset | [] | closed | false | null | 0 | 2020-12-07T14:00:41Z | 2020-12-08T14:58:28Z | 2020-12-08T14:42:51Z | null | Adding LiMiT: The Literal Motion in Text Dataset
https://github.com/ilmgut/limit_dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1256/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1256.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1256",
"merged_at": "2020-12-08T14:42:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1256.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1256"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1514/comments | https://api.github.com/repos/huggingface/datasets/issues/1514/events | https://github.com/huggingface/datasets/issues/1514 | 764,017,148 | MDU6SXNzdWU3NjQwMTcxNDg= | 1,514 | how to get all the options of a property in datasets | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 2 | 2020-12-12T16:24:08Z | 2022-05-25T16:27:29Z | 2022-05-25T16:27:29Z | null | Hi
could you tell me how I can get all the unique options of a property of a dataset?
for instance, in the case of boolq, if the user wants to know which unique labels it has, is there a way to access the unique labels without getting all the training data labels and then forming a set? Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1514/timeline | null | completed | null | null | false | [
"In a dataset, labels correspond to the `ClassLabel` feature that has the `names` property that returns string represenation of the integer classes (or `num_classes` to get the number of different classes).",
"I think the `features` attribute of the dataset object is what you are looking for:\r\n```\r\n>>> dataset.features\r\n{'sentence1': Value(dtype='string', id=None),\r\n 'sentence2': Value(dtype='string', id=None),\r\n 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None),\r\n 'idx': Value(dtype='int32', id=None)\r\n}\r\n>>> dataset.features[\"label\"].names\r\n['not_equivalent', 'equivalent']\r\n```\r\n\r\nFor reference: https://huggingface.co/docs/datasets/exploring.html"
] |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | [] | closed | false | null | 5 | 2022-05-31T17:37:25Z | 2022-06-08T11:40:18Z | 2022-06-08T11:31:45Z | null | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"merged_at": "2022-06-08T11:31:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@mishig25 is investigating why deprecated/added do not affect the enclosed text format when used in args docstring: no special formatting appears: \r\n- https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder",
"@albertvillanova please check now 👍 \r\nhttps://moon-ci-docs.huggingface.co/docs/datasets/pr_4429/en/package_reference/builder_classes#datasets.DatasetBuilder\r\n\r\n<img width=\"500\" alt=\"Screenshot 2022-06-06 at 10 20 34\" src=\"https://user-images.githubusercontent.com/11827707/172123471-fab97138-c903-4a71-ab7f-c90e5e43c58f.png\">\r\n",
"Thanks @mishig25.\r\n\r\nJust one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?",
"> Just one question: is it expected to have the deprecated box right edge not filling all the page width (contrary to the added box)?\r\n\r\nYes, that is expected 😊 because the depreacted box is being bounded by its parent box (the box for `name` argument in the screenshot above)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3569/comments | https://api.github.com/repos/huggingface/datasets/issues/3569/events | https://github.com/huggingface/datasets/pull/3569 | 1,100,478,994 | PR_kwDODunzps4w3XGo | 3,569 | Add the DKTC dataset (Extension of #3564) | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 9 | 2022-01-12T15:31:29Z | 2022-10-01T06:43:05Z | 2022-10-01T06:43:04Z | null | New pull request of #3564. (for DKTC)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3569/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3569",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3569"
} | true | [
"I reflect your comment! @lhoestq ",
"Wait, the format of the data just changed, so I'll take it into consideration and commit it.",
"I update the code according to the dataset structure change.",
"Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).",
"> Thanks ! I think the dummy data are not valid yet - the dummy train.csv file only contains a partial example (the quotes `\"` start but never end).\r\n\r\nHi! @lhoestq There is a problem. \r\n<img src=\"https://user-images.githubusercontent.com/42150335/149804142-3800e635-f5a0-44d9-9694-0c2b0c05f16b.png\" width=500>\r\n \r\nAs shown in the picture above, the conversation is divided into \"\\n\" in the \"conversion\" column. \r\nThat's why there's an error in the file path that only saved only five lines like below. \r\n\r\n```\r\n'idx', 'class', 'conversation'\r\n'0', '협박 대화', '\"지금 너 스스로를 죽여달라고 애원하는 것인가?'\r\n아닙니다. 죄송합니다.'\r\n죽을 거면 혼자 죽지 우리까지 사건에 휘말리게 해? 진짜 죽여버리고 싶게.'\r\n정말 잘못했습니다.\r\n```\r\n \r\nIn fact, these five lines are all one line. \r\n \r\n\r\n",
"Hi ! I see, in this case ca you make sure that the dummy data has a full sample ?\r\n\r\nFeel free to open the dummy train.csv in the dummy_data.zip file and add the missing lines",
"Sorry, I'm late to check! I'll send it to you soon!",
"Thanks for your contribution, @sooftware. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there, under this organization namespace: https://huggingface.co/tunib\r\n\r\nPlease, feel free to tell us if you need some help.",
"Close this PR. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4992/comments | https://api.github.com/repos/huggingface/datasets/issues/4992/events | https://github.com/huggingface/datasets/pull/4992 | 1,379,031,842 | PR_kwDODunzps4_QVw4 | 4,992 | Support streaming iwslt2017 dataset | [] | closed | false | null | 1 | 2022-09-20T08:35:41Z | 2022-09-20T09:27:55Z | 2022-09-20T09:15:24Z | null | Support streaming iwslt2017 dataset.
Once this PR is merged:
- [x] Remove old ".tgz" data files from the Hub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4992/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4992.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4992",
"merged_at": "2022-09-20T09:15:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4992.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4992"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3836/comments | https://api.github.com/repos/huggingface/datasets/issues/3836/events | https://github.com/huggingface/datasets/pull/3836 | 1,161,072,531 | PR_kwDODunzps40Bobr | 3,836 | Logo float left | [] | closed | false | null | 3 | 2022-03-07T08:38:34Z | 2022-03-07T20:21:11Z | 2022-03-07T09:14:11Z | null | <img width="1000" alt="Screenshot 2022-03-07 at 09 35 29" src="https://user-images.githubusercontent.com/11827707/156996422-339ba43e-932b-4849-babf-9321cb99c922.png">
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3836/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3836/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3836.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3836",
"merged_at": "2022-03-07T09:14:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3836.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3836"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3836). All of your documentation changes will be reflected on that endpoint.",
"Weird, the logo doesn't seem to be floating on my side (using Chrome) at https://huggingface.co/docs/datasets/master/en/index",
"https://huggingface.co/docs/datasets/index\r\n\r\nThe needed css change from moon-landing just got deployed"
] |
https://api.github.com/repos/huggingface/datasets/issues/5663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5663/comments | https://api.github.com/repos/huggingface/datasets/issues/5663/events | https://github.com/huggingface/datasets/issues/5663 | 1,637,173,248 | I_kwDODunzps5hlUgA | 5,663 | CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2023-03-23T09:39:43Z | 2023-03-23T10:09:55Z | 2023-03-23T10:09:55Z | null | CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662
```
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions.
===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ======
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5663/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3229/comments | https://api.github.com/repos/huggingface/datasets/issues/3229/events | https://github.com/huggingface/datasets/pull/3229 | 1,046,706,425 | PR_kwDODunzps4uMKsx | 3,229 | Fix URL in CITATION file | [] | closed | false | null | 0 | 2021-11-07T10:04:35Z | 2021-11-07T10:04:46Z | 2021-11-07T10:04:45Z | null | Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL):
```
@inproceedings{Lhoest_Datasets_A_Community_2021,
author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
month = {11},
pages = {175--184},
publisher = {Association for Computational Linguistics},
title = {{Datasets: A Community Library for Natural Language Processing}},
url = {https://github.com/huggingface/datasets},
year = {2021}
}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3229/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3229",
"merged_at": "2021-11-07T10:04:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3229"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1255/comments | https://api.github.com/repos/huggingface/datasets/issues/1255/events | https://github.com/huggingface/datasets/pull/1255 | 758,530,243 | MDExOlB1bGxSZXF1ZXN0NTMzNjg4Njg2 | 1,255 | [doc] nlp/viewer ➡️datasets/viewer | [] | closed | false | null | 0 | 2020-12-07T13:58:41Z | 2020-12-08T17:17:54Z | 2020-12-08T17:17:53Z | null | cc @srush | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1255/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1255/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1255.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1255",
"merged_at": "2020-12-08T17:17:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1255.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1255"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2736/comments | https://api.github.com/repos/huggingface/datasets/issues/2736/events | https://github.com/huggingface/datasets/issues/2736 | 956,895,199 | MDU6SXNzdWU5NTY4OTUxOTk= | 2,736 | Add Microsoft Building Footprints dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 1 | 2021-07-30T16:17:08Z | 2021-12-08T12:09:03Z | null | null | ## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge.
- **Paper:** *link to the dataset paper if available*
- **Data:** https://www.microsoft.com/en-us/maps/building-footprints
- **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Reported by: @sashavor | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2736/timeline | null | null | null | null | false | [
"Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4028/comments | https://api.github.com/repos/huggingface/datasets/issues/4028/events | https://github.com/huggingface/datasets/pull/4028 | 1,181,022,675 | PR_kwDODunzps41B429 | 4,028 | Fix docs on audio feature installation | [] | closed | false | null | 1 | 2022-03-25T16:55:11Z | 2022-03-31T16:20:47Z | 2022-03-31T16:15:20Z | null | This PR:
- Removes the explicit installation of `librosa` (this is installed with `pip install datasets[audio]`)
- Adds a warning for Linux users to manually install the non-Python package `libsndfile`
- Explains that the installation of `torchaudio` is only necessary to support loading audio datasets containing MP3 audio files (a small check sketch follows this list)
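A small sanity-check sketch reflecting the points above (purely illustrative; it only verifies the environment described in the bullets and is not part of this PR):
```python
# Illustrative environment check (not part of the PR): soundfile comes in via
# `pip install datasets[audio]` but still needs the system libsndfile on Linux,
# and torchaudio is only required when the dataset contains MP3 files.
import importlib.util

import soundfile as sf  # fails on Linux if the libsndfile system package is missing

print("soundfile imported, libsndfile is available")

if importlib.util.find_spec("torchaudio") is None:
    print("torchaudio not installed: datasets with MP3 audio files cannot be decoded")
else:
    print("torchaudio installed: MP3 audio files are supported")
```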
Related to #4000. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4028/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4028/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4028.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4028",
"merged_at": "2022-03-31T16:15:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4028.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4028"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3815 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3815/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3815/comments | https://api.github.com/repos/huggingface/datasets/issues/3815/events | https://github.com/huggingface/datasets/pull/3815 | 1,158,589,512 | PR_kwDODunzps4z5oq- | 3,815 | Fix iter_archive getting reset | [] | closed | false | null | 0 | 2022-03-03T15:58:52Z | 2022-03-03T18:06:37Z | 2022-03-03T18:06:13Z | null | The `DownloadManager.iter_archive` method currently returns an iterator - which is **empty** once you iter over it once. This means you can't pass the same archive iterator to several splits.
To fix that, I changed the output of `DownloadManager.iter_archive` to be an iterable that you can iterate over several times, instead of a one-time-use iterator.
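A minimal sketch of the idea (the wrapper below is illustrative only, not the actual implementation added in this PR):
```python
# Wrapping the generator logic in a class whose __iter__ re-opens the archive gives an
# iterable that can be consumed once per split without being exhausted.
import tarfile


class ReusableArchiveIterable:
    """Yields (member_name, file_object) pairs and can be iterated several times."""

    def __init__(self, archive_path):
        self.archive_path = archive_path

    def __iter__(self):
        # A fresh TarFile is opened on every call, so a second iteration
        # (e.g. for another split) starts from the beginning again.
        with tarfile.open(self.archive_path) as tar:
            for member in tar.getmembers():
                if member.isfile():
                    yield member.name, tar.extractfile(member)
```
Passing the same instance to two split generators then yields the files both times, whereas a plain generator would be empty on the second pass.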
The `StreamingDownloadManager.iter_archive` already returns an appropriate iterable, and the code added in this PR is inspired by the one in `streaming_download_manager.py` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3815/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3815/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3815",
"merged_at": "2022-03-03T18:06:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3815"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/583/comments | https://api.github.com/repos/huggingface/datasets/issues/583/events | https://github.com/huggingface/datasets/issues/583 | 695,166,265 | MDU6SXNzdWU2OTUxNjYyNjU= | 583 | ArrowIndexError on Dataset.select | [] | closed | false | null | 0 | 2020-09-07T14:36:29Z | 2020-09-08T07:43:15Z | 2020-09-08T07:43:15Z | null | If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
mnli.select(list(range(len(mnli))))
```
raises:
```python
---------------------------------------------------------------------------
ArrowIndexError Traceback (most recent call last)
<ipython-input-64-006a5d38d418> in <module>
----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))
~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
1653 if self._indices is not None:
1654 if PYARROW_V0:
-> 1655 indices_array = self._indices.column(0).chunk(0).take(indices_array)
1656 else:
1657 indices_array = self._indices.column(0).take(indices_array)
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: take index out of bounds
```
This is because the `take` method is only done on the first chunk which only contains 1000 elements by default (mnli has ~400 000 elements).
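A small reproduction of that chunking behaviour, written against a recent pyarrow (so the exact exception class may differ from the pyarrow<1.0.0 case above):
```python
# The indices table is chunked: taking on the full ChunkedArray sees every row,
# but taking on chunk(0) alone only sees the first chunk, so larger indices fail.
import pyarrow as pa

indices = pa.chunked_array([[0, 1, 2], [3, 4, 5]])  # two chunks, six rows in total

print(indices.take(pa.array([5])))  # works: the whole ChunkedArray is considered

try:
    indices.chunk(0).take(pa.array([5]))  # only three rows are visible here
except Exception as err:
    print(type(err).__name__, err)  # an out-of-bounds index error, as in the traceback above
```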
Shall we change that to use
```python
pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)
```
instead of `take` ? @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/583/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6054/comments | https://api.github.com/repos/huggingface/datasets/issues/6054/events | https://github.com/huggingface/datasets/issues/6054 | 1,813,271,304 | I_kwDODunzps5sFFMI | 6,054 | Multi-processed `Dataset.map` slows down a lot when `import torch` | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2023-07-20T06:36:14Z | 2023-07-21T15:19:37Z | 2023-07-21T15:19:37Z | null | ### Describe the bug
When using `Dataset.map` with `num_proc > 1`, it slows down a lot if I add `import torch` at the start of the script, even though I don't use it.
I'm not sure if it's `torch` only or if any other package that is "large" will also cause the same result.
BTW, `import lightning` also slows it down.
Below are the progress bars of `Dataset.map`; the only difference between them is whether `import torch` is present, yet the speed differs by a factor of 6-7.
- without `import torch` 
- with `import torch` 
### Steps to reproduce the bug
Below is the code I used, but I don't think the dataset and the mapping function have much to do with the phenomenon.
```python3
from datasets import load_from_disk, disable_caching
from transformers import AutoTokenizer
# import torch
# import lightning
def rearrange_datapoints(
batch,
tokenizer,
sequence_length,
):
datapoints = []
input_ids = []
for x in batch['input_ids']:
input_ids += x
while len(input_ids) >= sequence_length:
datapoint = input_ids[:sequence_length]
datapoints.append(datapoint)
input_ids[:sequence_length] = []
if input_ids:
paddings = [-1] * (sequence_length - len(input_ids))
datapoint = paddings + input_ids if tokenizer.padding_side == 'left' else input_ids + paddings
datapoints.append(datapoint)
batch['input_ids'] = datapoints
return batch
if __name__ == '__main__':
disable_caching()
tokenizer = AutoTokenizer.from_pretrained('...', use_fast=False)
dataset = load_from_disk('...')
dataset = dataset.map(
rearrange_datapoints,
fn_kwargs=dict(
tokenizer=tokenizer,
sequence_length=2048,
),
batched=True,
num_proc=8,
)
```
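One thing that may be worth checking here (an assumption on my part, not a confirmed explanation of the slowdown) is whether `import torch` changes the threading configuration that the worker processes inherit:
```python
# Diagnostic sketch only: inspect the thread settings before and after importing torch.
import os

print("OMP_NUM_THREADS =", os.environ.get("OMP_NUM_THREADS"))

import torch

print("torch.get_num_threads() =", torch.get_num_threads())

# If thread over-subscription turns out to be involved, limiting threads before the
# multiprocessing starts (e.g. torch.set_num_threads(1)) is a cheap thing to try.
torch.set_num_threads(1)
```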
### Expected behavior
The speed of the multi-processed `Dataset.map` call should be the same with and without `import torch`.
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6054/timeline | null | completed | null | null | false | [
"A duplicate of https://github.com/huggingface/datasets/issues/5929"
] |
https://api.github.com/repos/huggingface/datasets/issues/5224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5224/comments | https://api.github.com/repos/huggingface/datasets/issues/5224/events | https://github.com/huggingface/datasets/issues/5224 | 1,443,640,867 | I_kwDODunzps5WDDYj | 5,224 | Seems to freeze when loading audio dataset with wav files from local folder | [] | closed | false | null | 4 | 2022-11-10T10:29:31Z | 2023-04-25T09:54:05Z | 2022-11-22T11:24:19Z | null | ### Describe the bug
I'm following the instructions in https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata to load a dataset from a local folder.
I have everything in one folder: a train folder containing the audio files and the csv. When I try to load the dataset and run it from the terminal, it seems to work but then freezes for no apparent reason.
The metadata.csv file contains a few columns, but the important ones, `file_name` with the filename and `transcription` with the transcription, are okay.
The audios are `.wav` files, I don't know if that might be the problem (I will proceed to try to change them all to `.mp3` and try again).
### Steps to reproduce the bug
The code I'm using:
```python
from datasets import load_dataset
dataset = load_dataset("audiofolder", data_dir="../archive/Dataset")
dataset[0]["audio"]
```
The output I obtain:
```
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 311135.43it/s]
Using custom data configuration default-38d4546ffd010f3e
Downloading and preparing dataset audiofolder/default to /Users/mine/.cache/huggingface/datasets/audiofolder/default-38d4546ffd010f3e/0.0.0/6cbdd16f8688354c63b4e2a36e1585d05de285023ee6443ffd71c4182055c0fc...
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 166467.72it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 187772.74it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 59623.71it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 138090.55it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 106065.64it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 56036.38it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 74004.24it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 162343.45it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 101881.23it/s]
Using custom data configuration default-38d4546ffd010f3e
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 60145.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 80890.02it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 54036.67it/s]
Resolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 95851.09it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 155897.00it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 137656.96it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 439/439 [00:00<00:00, 131230.81it/s]
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
Using custom data configuration default-38d4546ffd010f3e
```
And then here it just freezes and nothing more happens.
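As a debugging aid (not a fix), one thing that can be checked is whether every file referenced in metadata.csv actually exists next to it; the path and column name below are assumptions based on the description above:
```python
# Debugging sketch: list metadata.csv entries whose audio file is missing.
import csv
import os

train_dir = "../archive/Dataset/train"  # assumed location of the split folder

with open(os.path.join(train_dir, "metadata.csv"), newline="") as f:
    for row in csv.DictReader(f):
        audio_path = os.path.join(train_dir, row["file_name"])
        if not os.path.isfile(audio_path):
            print("missing:", audio_path)
```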
### Expected behavior
Load the dataset.
### Environment info
Datasets version:
datasets 2.6.1 pypi_0 pypi
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5224/timeline | null | completed | null | null | false | [
"I just tried to do the same but changing the `.wav` files to `.mp3` files and that doesn't fix it.",
"I don't know if anyone will ever read this but I've tried to upload the same dataset with google colab and the output seems more clarifying. I didn't specify the train/test split so the dataset wasn't fully uploaded (or that is what I understood, might be wrong!!).\r\n\r\nNow, including the `drop_metadata` flag I can load the dataset normally (at least with colab notebook):\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"audiofolder\", data_dir=\"../archive/Dataset\", , drop_metadata=True)\r\n```\r\n\r\nI'll close the issue.",
"@uriii3 Hello, I understand correctly that you converted your wav files to mp3?",
"Yes but it didn't matter. I don't remember which of them I ended up working with."
] |
https://api.github.com/repos/huggingface/datasets/issues/4586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4586/comments | https://api.github.com/repos/huggingface/datasets/issues/4586/events | https://github.com/huggingface/datasets/pull/4586 | 1,287,105,636 | PR_kwDODunzps46e9xB | 4,586 | Host pn_summary data on the Hub instead of Google Drive | [] | closed | false | null | 1 | 2022-06-28T10:05:05Z | 2022-06-28T14:52:56Z | 2022-06-28T14:42:03Z | null | Fix #4581. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4586/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4586",
"merged_at": "2022-06-28T14:42:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4586"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5716/comments | https://api.github.com/repos/huggingface/datasets/issues/5716/events | https://github.com/huggingface/datasets/issues/5716 | 1,658,613,092 | I_kwDODunzps5i3G1k | 5,716 | Handle empty audio | [] | open | false | null | 1 | 2023-04-07T09:51:40Z | 2023-04-13T17:33:36Z | null | null | Some audio paths exist, but they are empty, and an error will be reported when reading the audio path.How to use the filter function to avoid the empty audio path?
When an audio file is empty, resampling breaks here:
`array, sampling_rate = sf.read(f); array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5716/timeline | null | null | null | null | false | [
"Hi! Can you share one of the problematic audio files with us?\r\n\r\nI tried to reproduce the error with the following code: \r\n```python\r\nimport soundfile as sf\r\nimport numpy as np\r\nfrom datasets import Audio\r\n\r\nsf.write(\"empty.wav\", np.array([]), 16000)\r\nAudio(sampling_rate=24000).decode_example({\"path\": \"empty.wav\", \"bytes\": None})\r\n```\r\nBut without success.\r\n\r\nAlso, what version of `librosa` is installed in your env? (You can get this info with `python -c \"import librosa; print(librosa.__version__)`)\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5903/comments | https://api.github.com/repos/huggingface/datasets/issues/5903/events | https://github.com/huggingface/datasets/pull/5903 | 1,727,372,549 | PR_kwDODunzps5RbV82 | 5,903 | Relax `ci.yml` trigger for `pull_request` based on modified paths | [] | open | false | null | 2 | 2023-05-26T10:46:52Z | 2023-05-26T10:51:37Z | null | null | ## What's in this PR?
In a previous PR, #5902, I saw that the CI was automatically triggered on any file change, in that case when modifying a Jupyter Notebook (.ipynb), which IMO could be skipped, as a modification to the Jupyter Notebook has no effect on the `ci.yml` outcome. So this PR restricts the paths that trigger `ci.yml` to avoid wasting resources when they are not needed.
## What's pending in this PR?
I would like to confirm whether this should affect both `push` and `pull_request`, since just modifications in those files won't change the `ci.yml` outcome, so maybe it's worth skipping it too in the `push` trigger. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5903/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5903",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5903"
} | true | [
"Also this could be extended to the rest of the GitHub Action `yml` files, so let me know whether you want me to have a look into it! 🤗",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5903). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3666/comments | https://api.github.com/repos/huggingface/datasets/issues/3666/events | https://github.com/huggingface/datasets/pull/3666 | 1,122,058,894 | PR_kwDODunzps4x-ULz | 3,666 | process .opus files (for Multilingual Spoken Words) | [] | closed | false | null | 3 | 2022-02-02T15:21:48Z | 2022-02-22T10:04:03Z | 2022-02-22T10:03:53Z | null | Opus files requires `libsndfile>=1.0.30`. Add check for this version and tests.
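A rough sketch of the kind of version check meant here, assuming python-soundfile's `__libsndfile_version__` attribute (illustrative only, not the actual code added in this PR):
```python
# Refuse to decode .opus files when the underlying libsndfile is too old.
import soundfile as sf

version = tuple(int(part) for part in sf.__libsndfile_version__.split(".")[:3])
if version < (1, 0, 30):
    raise RuntimeError(
        f"Decoding .opus files requires libsndfile>=1.0.30, found {sf.__libsndfile_version__}"
    )
```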
**outdated:**
Add [Multilingual Spoken Words dataset](https://mlcommons.org/en/multilingual-spoken-words/)
You can specify multiple languages for downloading 😌:
```python
ds = load_dataset("datasets/ml_spoken_words", languages=["ar", "tt"])
```
1. I didn't take into account that each time you pass a set of languages the data for a specific language is downloaded even if it was downloaded before (since these are custom configs like `ar+tt` and `ar+tt+br`). Maybe that wasn't a good idea?
2. The script will have to be slightly changed after the merge of https://github.com/huggingface/datasets/pull/3664
3. I just can't figure out what's wrong with the dummy files... 😞 Maybe we should get rid of them at some point 😁 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3666/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3666.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3666",
"merged_at": "2022-02-22T10:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3666.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3666"
} | true | [
"@lhoestq I still have problems with processing `.opus` files with `soundfile` so I actually cannot fully check that it works but it should... Maybe this should be investigated in case of someone else would also have problems with that.\r\n\r\nAlso, as the data is in a private repo on the hub (before we come to a decision about audio data privacy), the needed checks cannot be done right now.",
"@lhoestq I check the data redownloading for configs sharing the same languages, you were right: the data is downloaded once for each language. But samples are generated from scratch each time. Is it a supposed behavior? ",
"> But samples are generated from scratch each time. Is it a supposed behavior?\r\n\r\nYea that's the way it works right now, because we generate one arrow file per configuration. Since changing the languages creates a new configuration, then it generates a new arrow file."
] |
https://api.github.com/repos/huggingface/datasets/issues/5486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5486/comments | https://api.github.com/repos/huggingface/datasets/issues/5486/events | https://github.com/huggingface/datasets/issues/5486 | 1,564,059,749 | I_kwDODunzps5dOahl | 5,486 | Adding `sep` to TextConfig | [] | open | false | null | 2 | 2023-01-31T10:39:53Z | 2023-01-31T14:50:18Z | null | null | I have a local a `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column ? If so, I am happy to contribute!
## Environment
* `python 3.8.10`
* `datasets 2.9.0`
## Snippet of `train.txt`
```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R
The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```
## Current Behaviour
```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')
dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```
## Expected Behaviour / Suggestion
```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')
dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]
dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5486/timeline | null | null | null | null | false | [
"Hi @omar-araboghli, thanks for your proposal.\r\n\r\nHave you tried to use \"csv\" loader instead of \"text\"? That already has a `sep` argument.",
"Hi @albertvillanova, thanks for the quick response!\r\n\r\nIndeed, I have been trying to use `csv` instead of `text`. However I am still not able to define range of rows as one sequence, that is achievable with passing `sample_by='paragraph'` to the `TextConfig`\r\n\r\nFor instance, the below code\r\n\r\n```python\r\nimport datasets\r\n\r\ndataset = datasets.load_dataset(\r\n path='csv',\r\n data_files={'train': TRAINING_SET_PATH},\r\n sep='\\t',\r\n header=None,\r\n column_names=['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']\r\n)\r\n```\r\n\r\nleads to \r\n\r\n```python\r\ndataset\r\n>>> DatasetDict({\r\n train: Dataset({\r\n features: ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'],\r\n num_rows: 62543\r\n })\r\n})\r\n\r\ndataset['train'][0]\r\n>>> {'tokens': 'Distribution',\r\n 'pos_tags': 'NN',\r\n 'chunk_tags': 'O',\r\n 'ner_tags': 'O'\r\n}\r\n```\r\nIs there a way to deal with multiple csv rows as one dataset instance, where each column is a sequence of those rows ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/949/comments | https://api.github.com/repos/huggingface/datasets/issues/949/events | https://github.com/huggingface/datasets/pull/949 | 754,317,777 | MDExOlB1bGxSZXF1ZXN0NTMwMjM4MTky | 949 | Add GermaNER Dataset | [] | closed | false | null | 1 | 2020-12-01T11:33:31Z | 2020-12-03T14:06:41Z | 2020-12-03T14:06:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/949/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/949",
"merged_at": "2020-12-03T14:06:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/949"
} | true | [
"@lhoestq added. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3991 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3991/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3991/comments | https://api.github.com/repos/huggingface/datasets/issues/3991/events | https://github.com/huggingface/datasets/issues/3991 | 1,177,362,901 | I_kwDODunzps5GLSHV | 3,991 | Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 0 | 2022-03-22T22:16:05Z | 2022-03-23T12:57:16Z | null | null | ## Adding a Dataset
- **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)*
- **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.*
- **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)*
- **Motivation:** *Key dataset in the healthcare community*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
FYI @osanseviero @abidlabs | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3991/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3991/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1639/comments | https://api.github.com/repos/huggingface/datasets/issues/1639/events | https://github.com/huggingface/datasets/issues/1639 | 774,903,472 | MDU6SXNzdWU3NzQ5MDM0NzI= | 1,639 | bug with sst2 in glue | [] | closed | false | null | 3 | 2020-12-26T16:57:23Z | 2022-10-05T12:40:16Z | 2022-10-05T12:40:16Z | null | Hi
I am getting very low accuracy on SST2. I investigated this and observed that for this dataset the sentences are tokenized, while the other datasets in GLUE are fine; please see below.
Are there any alternatives where I could get untokenized sentences? I am unfortunately under time pressure to report some results on this dataset. Thank you for your help. @lhoestq
```
>>> a = datasets.load_dataset('glue', 'sst2', split="validation", script_version="master")
Reusing dataset glue (/julia/datasets/glue/sst2/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)
>>> a[:10]
{'idx': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 'label': [1, 0, 1, 1, 0, 1, 0, 0, 1, 0], 'sentence': ["it 's a charming and often affecting journey . ", 'unflinchingly bleak and desperate ', 'allows us to hope that nolan is poised to embark a major career as a commercial yet inventive filmmaker . ', "the acting , costumes , music , cinematography and sound are all astounding given the production 's austere locales . ", "it 's slow -- very , very slow . ", 'although laced with humor and a few fanciful touches , the film is a refreshingly serious look at young women . ', 'a sometimes tedious film . ', "or doing last year 's taxes with your ex-wife . ", "you do n't have to know about music to appreciate the film 's easygoing blend of comedy and romance . ", "in exactly 89 minutes , most of which passed as slowly as if i 'd been sitting naked on an igloo , formula 51 sank from quirky to jerky to utter turkey . "]}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1639/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1639/timeline | null | completed | null | null | false | [
"Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```",
"I am looking for alternative file URL here instead of adding extra processing code: https://github.com/huggingface/datasets/blob/171f2bba9dd8b92006b13cf076a5bf31d67d3e69/datasets/glue/glue.py#L174",
"I don't know if there exists a detokenized version somewhere. Even the version on kaggle is tokenized"
] |
https://api.github.com/repos/huggingface/datasets/issues/661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/661/comments | https://api.github.com/repos/huggingface/datasets/issues/661/events | https://github.com/huggingface/datasets/pull/661 | 706,465,936 | MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw | 661 | Replace pa.OSFile by open | [] | closed | false | null | 0 | 2020-09-22T15:05:59Z | 2021-05-05T18:24:36Z | 2020-09-22T15:15:25Z | null | It should fix #643 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/661/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/661/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/661.diff",
"html_url": "https://github.com/huggingface/datasets/pull/661",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/661.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/661"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/163/comments | https://api.github.com/repos/huggingface/datasets/issues/163/events | https://github.com/huggingface/datasets/issues/163 | 620,534,307 | MDU6SXNzdWU2MjA1MzQzMDc= | 163 | [Feature request] Add cos-e v1.0 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 10 | 2020-05-18T22:05:26Z | 2020-06-16T23:15:25Z | 2020-06-16T18:52:06Z | null | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](https://arxiv.org/pdf/2004.14546.pdf). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/163/timeline | null | completed | null | null | false | [
"Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann",
"cos_e v1.0 is related to CQA v1.0 but only CQA v1.11 dataset is available on their website. Indeed their is lots of ids in cos_e v1, which are not in CQA v1.11 or the other way around.\r\n@sarahwie, @thomwolf, @nazneenrajani, @bmccann do you know where I can find CQA v1.0\r\n",
"@mariamabarham I'm also not sure where to find CQA 1.0. Perhaps it's not possible to include this version of the dataset. I'll close the issue if that's the case.",
"I do have a copy of the dataset. I can upload it to our repo.",
"Great @nazneenrajani. let me know once done.\r\nThanks",
"@mariamabarham @sarahwie I added them to the cos-e repo https://github.com/salesforce/cos-e/tree/master/data/v1.0",
"You can now do\r\n```python\r\nfrom nlp import load_dataset\r\ncos_e = load_dataset(\"cos_e\", \"v1.0\")\r\n```\r\nThanks @mariamabarham !",
"Thanks!",
"@mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended). ",
"> @mariamabarham Just wanted to note that default behavior `cos_e = load_dataset(\"cos_e\")` now loads `v1.0`. Not sure if this is intentional (but the flag specification does work as intended).\r\n\r\nIn the new version of `nlp`, if you try `cos_e = load_dataset(\"cos_e\")` it throws this error:\r\n```\r\nValueError: Config name is missing.\r\nPlease pick one among the available configs: ['v1.0', 'v1.11']\r\nExample of usage:\r\n\t`load_dataset('cos_e', 'v1.0')`\r\n```\r\nFor datasets with at least two configurations, we now force the user to pick one (no default)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4179/comments | https://api.github.com/repos/huggingface/datasets/issues/4179/events | https://github.com/huggingface/datasets/issues/4179 | 1,208,001,118 | I_kwDODunzps5IAKJe | 4,179 | Dataset librispeech_asr fails to load | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 21 | 2022-04-19T08:45:48Z | 2022-07-27T16:10:00Z | 2022-07-27T16:10:00Z | null | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/librispeech_asr), it says it has two configurations (clean and other).
However, the dataset doc says that not specifying `split` should just load the whole dataset, which is what I want.
Also, in the case of this specific dataset, this is the standard that the community uses. When you look at publications with results on Librispeech, they always use the whole train dataset for training.
## Actual results
```
...
File "/home/az/.cache/huggingface/modules/datasets_modules/datasets/librispeech_asr/1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c/librispeech_asr.py", line 119, in LibrispeechASR._split_generators
line: archive_path = dl_manager.download(_DL_URLS[self.config.name])
locals:
archive_path = <not found>
dl_manager = <local> <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>
dl_manager.download = <local> <bound method DownloadManager.download of <datasets.utils.download_manager.DownloadManager object at 0x7fc07b426160>>
_DL_URLS = <global> {'clean': {'dev': 'http://www.openslr.org/resources/12/dev-clean.tar.gz', 'test': 'http://www.openslr.org/resources/12/test-clean.tar.gz', 'train.100': 'http://www.openslr.org/resources/12/train-clean-100.tar.gz', 'train.360': 'http://www.openslr.org/resources/12/train-clean-360.tar.gz'}, 'other'...
self = <local> <datasets_modules.datasets.librispeech_asr.1f4602f6b5fed8d3ab3e3382783173f2e12d9877e98775e34d7780881175096c.librispeech_asr.LibrispeechASR object at 0x7fc12a633310>
self.config = <local> BuilderConfig(name='default', version=0.0.0, data_dir='/home/az/i6/setups/2022-03-20--sis/work/i6_core/datasets/huggingface/DownloadAndPrepareHuggingFaceDatasetJob.TV6Nwm6dFReF/output/data_dir', data_files=None, description=None)
self.config.name = <local> 'default', len = 7
KeyError: 'default'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.0
- Platform: Linux-5.4.0-107-generic-x86_64-with-glibc2.31
- Python version: 3.9.9
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4179/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4179/timeline | null | completed | null | null | false | [
"@patrickvonplaten Hi! I saw that you prepared this? :)",
"Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:\r\n\r\n> If you want to download things from this site, please download them one at a time, and please don't use any fancy software-- just download things from your browser or use 'wget'. We have a firewall rule to drop connections from hosts with more than 5 simultaneous connections, and certain types of download software may activate this rule.\r\n\r\nRelated: https://github.com/tensorflow/datasets/issues/3885",
"Hey @albertz,\r\n\r\nNice to see you here! It's been a while ;-) ",
"Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specific a config (either `other` or `clean`) though:\r\n\r\n```py\r\ndatasets.load_dataset(\"librispeech_asr\", \"clean\")\r\n```\r\n\r\nshould work and give you all splits (being \"train\", \"test\", ...) for the clean config of the dataset.\r\n",
"If you need both `\"clean\"` and `\"other\"` I think you'll have to do concatenate them as follows: \r\n\r\n```py\r\nfrom datasets import concatenate_datasets, load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\")\r\nclean = load_dataset(\"librispeech_asr\", \"clean\")\r\n\r\nlibrispeech = concatenate_datasets([other, clean])\r\n```\r\n\r\nSee https://huggingface.co/docs/datasets/v2.1.0/en/process#concatenate",
"Downloading one split would be:\r\n\r\n```py\r\nfrom datasets import load_dataset\r\n\r\nother = load_dataset(\"librispeech_asr\", \"other\", split=\"train.500\")\r\n```\r\n\r\n\r\n",
"cc @lhoestq FYI maybe the docs can be improved here",
"Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset(\"librispeech_asr\")` works?",
"Pinging @lhoestq here, think this could make sense! Not sure however how the dictionary would then look like",
"Would it make sense to have `clean` as the default config ?\r\n\r\nAlso I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nI also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183",
"> Would it make sense to have `clean` as the default config ?\r\n\r\nI think a user would expect that the default would give you the full dataset.\r\n\r\n> Also I think `load_dataset(\"librispeech_asr\")` should have raised you an error that says that you need to specify a config\r\n\r\nIt does raise an error, but this error confused me because I did not understand why I needed a config, or why I could not simply download the whole dataset, which is what people usually do with Librispeech.\r\n",
"+1 for @albertz. Also think lots of people download the whole dataset (`\"clean\"` + `\"other\"`) for Librispeech.\r\n\r\nThink there are also some people though who:\r\n- a) Don't have the memory to store the whole dataset\r\n- b) Just want to evaluate on one of the two configs",
"Ok ! Adding the \"all\" configuration would do the job then, thanks ! In the \"all\" configuration we can merge all the train.xxx splits into one \"train\" split, or keep them separate depending on what's the most practical to use (probably put everything in \"train\" no ?)",
"I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.\r\n",
"Hey @albertz, \r\n\r\nopened a PR here. Think by adding the \"subdataset\" class to each split \"train\", \"dev\", \"other\" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/main_classes#datasets.Dataset.filter )",
"But also since everything is cached one could also just do:\r\n\r\n```python\r\nload_dataset(\"librispeech\", \"clean\", \"train.100\")\r\nload_dataset(\"librispeech\", \"clean\", \"train.100+train.360\")\r\nload_dataset(\"librispeech\" \"all\", \"train\") \r\n```",
"Hi @patrickvonplaten ,\r\n\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?",
"Hmm, I don't really see how that's possible: https://github.com/huggingface/datasets/blob/d22e39a0693d4be7410cf9a5d41fd5aac22be3cc/datasets/librispeech_asr/librispeech_asr.py#L51\r\n\r\nNote that all datasets related to `\"clean\"` are downloaded, but only `\"train.100\"` should be used. \r\n\r\ncc @lhoestq @albertvillanova @mariosasko can we do anything against download dataset links that are not related to the \"split\" that one actually needs. E.g. why should the split `\"train.360\"` be downloaded if for the user executes the above command:\r\n\r\n```py\r\nload_dataset(\"librispeech_asr\", \"clean\", \"train.100\")\r\n```",
"@patrickvonplaten This problem is a bit harder than it may seem, and it has to do with how our scripts are structured - `_split_generators` downloads data for a split before its definition. There was an attempt to fix this in https://github.com/huggingface/datasets/pull/2249, but it wasn't flexible enough. Luckily, I have a plan of attack, and this issue is on our short-term roadmap, so I'll work on it soon.\r\n\r\nIn the meantime, one can use streaming or manually download a dataset script, remove unwanted splits and load a dataset via `load_dataset`.",
"> load_dataset(\"librispeech_asr\", \"clean\", \"train.100\") actually downloads the whole dataset and not the 100 hr split, is this a bug?\r\n\r\nSince this bug is still there and google led me here when I was searching for a solution, I am writing down how to quickly fix it (as suggested by @mariosasko) for whoever else is not familiar with how the HF Hub works.\r\n\r\nDownload the [librispeech_asr.py](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py) script and remove the unwanted splits both from the [`_DL_URLS` dictionary](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L47-L68) and from the [`_split_generators` function](https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py#L121-L241).\r\n[Here ](https://huggingface.co/datasets/andreagasparini/librispeech_test_only) I made an example with only the test sets.\r\n\r\nThen either save the script locally and load the dataset via \r\n```python\r\nload_dataset(\"${local_path}/librispeech_asr.py\")\r\n```\r\n\r\nor [create a new dataset repo on the hub](https://huggingface.co/new-dataset) named \"librispeech_asr\" and upload the script there, then you can just run\r\n```python\r\nload_dataset(\"${hugging_face_username}/librispeech_asr\")\r\n```",
"Fixed by https://github.com/huggingface/datasets/pull/4184"
] |
https://api.github.com/repos/huggingface/datasets/issues/331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/331/comments | https://api.github.com/repos/huggingface/datasets/issues/331/events | https://github.com/huggingface/datasets/issues/331 | 648,533,199 | MDU6SXNzdWU2NDg1MzMxOTk= | 331 | Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 5 | 2020-06-30T22:21:33Z | 2020-07-09T13:03:40Z | 2020-07-09T13:03:40Z | null | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset
builder_instance.download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare
self._download_and_prepare(
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/331/timeline | null | completed | null | null | false | [
"I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"here's the log\r\n```\r\n>>> import nlp\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\nnlp.load_dataset('cnn_dailymail', '3.0.0')\r\n>>> import logging\r\n>>> logging.basicConfig(level=logging.INFO)\r\n>>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\nINFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\nINFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\nINFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\nINFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\nINFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\nINFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\nINFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\nINFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\nINFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nINFO:nlp.utils.info_utils:All the checksums matched successfully.\r\nINFO:nlp.builder:Generating split train\r\nINFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\nINFO:nlp.builder:Generating split validation\r\nINFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\nINFO:nlp.builder:Generating split test\r\nINFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\nnlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n```",
"> here's the log\r\n> \r\n> ```\r\n> >>> import nlp\r\n> import logging\r\n> logging.basicConfig(level=logging.INFO)\r\n> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> >>> import logging\r\n> >>> logging.basicConfig(level=logging.INFO)\r\n> >>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n> INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py for additional imports.\r\n> INFO:filelock:Lock 140443095301136 acquired on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.load:Found main folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail\r\n> INFO:nlp.load:Found specific version folder for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.load:Found script file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.py\r\n> INFO:nlp.load:Updating dataset infos file from https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/dataset_infos.json to /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/dataset_infos.json\r\n> INFO:nlp.load:Found metadata file for dataset https://s3.amazonaws.com/datasets.huggingface.co/nlp/datasets/cnn_dailymail/cnn_dailymail.py at /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad/cnn_dailymail.json\r\n> INFO:filelock:Lock 140443095301136 released on /u/jm8wx/.cache/huggingface/datasets/720d2e20d8dc6d98f21195a39cc934bb41dd0a40b57ea3d323661a7c5d70522c.d44c2417f4e0fe938ede0a684dcbb1fa9b4789de22e8a99c43103d4b4c374b3b.py.lock\r\n> INFO:nlp.info:Loading Dataset Infos from /p/qdata/jm8wx/datasets/nlp/src/nlp/datasets/cnn_dailymail/9645e0bc96f647decf46541f6f4bef6936ee82ace653ac362bab03309a46d4ad\r\n> INFO:nlp.builder:Generating dataset cnn_dailymail (/u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0)\r\n> INFO:nlp.builder:Dataset not on Hf google storage. 
Downloading and preparing it from source\r\n> Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\n> INFO:nlp.utils.info_utils:All the checksums matched successfully.\r\n> INFO:nlp.builder:Generating split train\r\n> INFO:nlp.arrow_writer:Done writing 285161 examples in 1240618482 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-train.arrow.\r\n> INFO:nlp.builder:Generating split validation\r\n> INFO:nlp.arrow_writer:Done writing 13255 examples in 56637485 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-validation.arrow.\r\n> INFO:nlp.builder:Generating split test\r\n> INFO:nlp.arrow_writer:Done writing 11379 examples in 48931393 bytes /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0.incomplete/cnn_dailymail-test.arrow.\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py\", line 520, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 431, in download_and_prepare\r\n> self._download_and_prepare(\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py\", line 488, in _download_and_prepare\r\n> verify_splits(self.info.splits, split_dict)\r\n> File \"/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py\", line 70, in verify_splits\r\n> raise NonMatchingSplitsSizesError(str(bad_splits))\r\n> nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}]\r\n> ```\r\n\r\nWith `nlp == 0.3.0` version, I'm not able to reproduce this error on my side.\r\nWhich version are you using for reproducing your bug?\r\n\r\n```\r\n>> nlp.load_dataset('cnn_dailymail', '3.0.0')\r\n\r\n8.90k/8.90k [00:18<00:00, 486B/s]\r\n\r\nDownloading: 100%\r\n9.37k/9.37k [00:00<00:00, 234kB/s]\r\n\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...\r\nDownloading:\r\n159M/? [00:09<00:00, 16.7MB/s]\r\n\r\nDownloading:\r\n376M/? [00:06<00:00, 62.6MB/s]\r\n\r\nDownloading:\r\n2.11M/? [00:06<00:00, 333kB/s]\r\n\r\nDownloading:\r\n46.4M/? [00:02<00:00, 18.4MB/s]\r\n\r\nDownloading:\r\n2.43M/? [00:00<00:00, 2.62MB/s]\r\n\r\nDataset cnn_dailymail downloaded and prepared to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0. 
Subsequent calls will reuse this data.\r\n{'test': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 11490),\r\n 'train': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 287113),\r\n 'validation': Dataset(schema: {'article': 'string', 'highlights': 'string'}, num_rows: 13368)}\r\n\r\n>> ...\r\n\r\n```",
"In general if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError `), it is often due to either\r\n1) corrupted cached files\r\n2) decoding errors\r\n\r\nI just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into the processing of the dataset, could you try to clear your cache ? Just to make sure that it isn't 1)",
"Yes thanks for the support! I cleared out my cache folder and everything works fine now"
] |
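The resolution in the thread above was to clear the local cache; with a current `datasets` install (the thread itself still uses the old `nlp` package), a fresh download can also be forced directly. A minimal sketch, assuming the dataset name and config from the thread:

```python
from datasets import load_dataset

# Forcing a re-download bypasses possibly corrupted cached Arrow files,
# one common cause of NonMatchingSplitsSizesError.
dataset = load_dataset(
    "cnn_dailymail",
    "3.0.0",
    download_mode="force_redownload",
)
print(dataset)
```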
https://api.github.com/repos/huggingface/datasets/issues/2787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2787/comments | https://api.github.com/repos/huggingface/datasets/issues/2787/events | https://github.com/huggingface/datasets/issues/2787 | 967,018,406 | MDU6SXNzdWU5NjcwMTg0MDY= | 2,787 | ConnectionError: Couldn't reach https://raw.githubusercontent.com | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2021-08-11T16:19:01Z | 2021-11-24T06:25:38Z | 2021-08-18T15:09:18Z | null | Hello,
I am trying to run run_glue.py and it gives me this error -
Traceback (most recent call last):
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module>
main()
File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main
datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir)
File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset
use_auth_token=use_auth_token,
File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py
Trying to do python run_glue.py --model_name_or_path
bert-base-cased
--task_name
mrpc
--do_train
--do_eval
--max_seq_length
128
--per_device_train_batch_size
32
--learning_rate
2e-5
--num_train_epochs
3
--output_dir
./tmp/mrpc/
Is this something on my end? From what I can tell, this was re-fixed by @fullyz a few months ago.
Thank you!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2787/timeline | null | completed | null | null | false | [
"the bug code locate in :\r\n if data_args.task_name is not None:\r\n # Downloading and loading a dataset from the hub.\r\n datasets = load_dataset(\"glue\", data_args.task_name, cache_dir=model_args.cache_dir)",
"Hi @jinec,\r\n\r\nFrom time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com\r\n\r\nNormally, it should work if you wait a little and then retry.\r\n\r\nCould you please confirm if the problem persists?",
"cannot connect,even by Web browser,please check that there is some problems。",
"I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...",
"> I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem...\r\n\r\nI can not access https://raw.githubusercontent.com/huggingface/datasets either, I am in China",
"Finally i can access it, by the superfast software. Thanks",
"> Finally i can access it, by the superfast software. Thanks\r\n\r\nExcuse me, I have the same problem as you, could you please tell me how to solve it?"
] |
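Since the `ConnectionError` above is usually transient (the script is fetched from raw.githubusercontent.com), a simple retry loop is often enough. A hedged sketch; the retry count and delay are arbitrary illustration values:

```python
import time

from datasets import load_dataset


def load_with_retries(name, config, retries=3, delay=30):
    # load_dataset raises the builtin ConnectionError when the dataset
    # script host cannot be reached; retry a few times before giving up.
    for attempt in range(retries):
        try:
            return load_dataset(name, config)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)


glue_mrpc = load_with_retries("glue", "mrpc")
```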
https://api.github.com/repos/huggingface/datasets/issues/5107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5107/comments | https://api.github.com/repos/huggingface/datasets/issues/5107/events | https://github.com/huggingface/datasets/pull/5107 | 1,406,736,710 | PR_kwDODunzps5ArjCZ | 5,107 | Multiprocessed dataset builder | [] | closed | false | null | 17 | 2022-10-12T19:59:17Z | 2022-12-01T15:37:09Z | 2022-11-09T17:11:43Z | null | This PR adds the multiprocessing part of #2650 (but not the caching of already-computed arrow files). On the other side, loading of sharded arrow files still needs to be implemented (sharded parquet files can already be loaded). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5107/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5107",
"merged_at": "2022-11-09T17:11:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5107"
} | true | [
"I would also like to add a test, but am not sure whether it should go into `test_builder` (more natural imo) or `test_load` (which already contains a lot of the things I have to import to run my current testing setup). For reference, what I run to test that it works looks like:\r\n\r\n```\r\nimport os\r\nfrom pathlib import Path\r\nimport shutil\r\n\r\nimport datasets\r\nfrom datasets.builder import DatasetBuilder\r\nfrom datasets.features import Features, Value\r\n\r\nDATASET_LOADING_SCRIPT_NAME = \"__dummy_dataset1__\"\r\n\r\nDATASET_LOADING_SCRIPT_CODE = \"\"\"\r\nimport os\r\n\r\nimport datasets\r\nfrom datasets import DatasetInfo, Features, Split, SplitGenerator, Value\r\n\r\n\r\nclass __DummyDataset1__(datasets.GeneratorBasedBuilder):\r\n\r\n def _info(self) -> DatasetInfo:\r\n return DatasetInfo(features=Features({\"text\": Value(\"string\")}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [\r\n SplitGenerator(Split.TRAIN, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"train1.txt\"), os.path.join(dl_manager.manual_dir, \"train2.txt\")]}),\r\n SplitGenerator(Split.TEST, gen_kwargs={\"filepaths\": [os.path.join(dl_manager.manual_dir, \"test.txt\")]}),\r\n ]\r\n\r\n def _generate_examples(self, filepaths, **kwargs):\r\n idx = 0\r\n for filepath in filepaths:\r\n with open(filepath, \"r\", encoding=\"utf-8\") as f:\r\n for line in f:\r\n yield idx, {\"text\": line.strip()}\r\n idx += 1\r\n\"\"\"\r\n\r\n\r\ndef dataset_loading_script_dir(tmp_path):\r\n script_name = DATASET_LOADING_SCRIPT_NAME\r\n script_dir = tmp_path / script_name\r\n script_dir.mkdir()\r\n script_path = script_dir / f\"{script_name}.py\"\r\n with open(script_path, \"w\") as f:\r\n f.write(DATASET_LOADING_SCRIPT_CODE)\r\n return str(script_dir)\r\n\r\n\r\ndef data_dir(tmp_path):\r\n data_dir = tmp_path / \"data_dir\"\r\n data_dir.mkdir()\r\n with open(data_dir / \"train1.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"train2.txt\", \"w\") as f:\r\n f.write(\"foo\\n\" * 10)\r\n with open(data_dir / \"test.txt\", \"w\") as f:\r\n f.write(\"bar\\n\" * 10)\r\n return str(data_dir)\r\n\r\n\r\ndef load_dataset_builder_multiprocessed(tmp_path):\r\n builder = datasets.load_dataset_builder(\r\n os.path.join(dataset_loading_script_dir(tmp_path), DATASET_LOADING_SCRIPT_NAME + \".py\"),\r\n data_dir=data_dir(tmp_path),\r\n )\r\n assert isinstance(builder, DatasetBuilder)\r\n assert builder.name == DATASET_LOADING_SCRIPT_NAME\r\n assert builder.info.features == Features({\"text\": Value(\"string\")})\r\n builder.download_and_prepare(tmp_path / \"prepare_target\", max_shard_size=500, num_proc=2)\r\n\r\nif __name__ == \"__main__\":\r\n tmp_path = \"tmp\"\r\n if os.path.exists(tmp_path):\r\n raise FileExistsError(f\"path {tmp_path} already exists\")\r\n os.makedirs(tmp_path)\r\n try:\r\n load_dataset_builder_multiprocessed(Path(tmp_path))\r\n finally:\r\n # pass\r\n shutil.rmtree(tmp_path)\r\n```",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5107). All of your documentation changes will be reflected on that endpoint.",
"Nice ! I think the test can go in `test_builder.py` :)",
"I've added sharded arrow dataset loading. Two WIP items in the PR:\r\n- ~~Order is not conserved (it seems like the sharded files are read in the wrong order)~~\r\n- the tqdm for preparing the splits is wrong (it compares against the size of the whole split rather than against the size of the multiprocessing shard, but I am not sure how to access the latter)\r\n\r\nAlso `naming.filenames_for_dataset_split` is not very elegant imo.\r\n\r\n@lvwerra if you don't care about order, as I do, it's functional for now but I'd still quite like to get to the bottom of this.",
"Found the ordering bug ! (`glob.glob` returning stuff in arbitrary order)",
"I fixed the tqdm to be less misleading, but it can't tell where to stop. I am a bit hesitant to add a top-level tqdm (on the shard iterator) since for most intents it will do 0 -> N shards straight, but I am not sure what is the best way to present that info here.",
"I'm continuing the PR :)",
"Did a few changes:\r\n- make shards naming consistent:\r\n - use `{builder_name}-{split_name}.{file_format}` when there's only 1 shard\r\n - otherwise use `{builder_name}-{split_name}-{shard_idx:05d}-of-{num_shards:05d}.{file_format}`\r\n- update the reader to support reading several shards\r\n - added a new `shard_lengths` field in `SplitInfo` (FYI it is saved in `dataset_info.json` next to the shards as usual)\r\n - it's None when there's only 1 shard\r\n - otherwise it's a list of integers that correspond to the number of rows per shard\r\n - implemented partial reading to only memory map the required shards\r\n - e.g. when someone asks for a partial split like `train[:10%]`\r\n- align the sharding for beam datasets\r\n - no more combining into 1 big arrow file\r\n- added a tqdm bar\r\n - only one single bar, handled by the main process\r\n - gathers progress updates from other processes using `iflatmap_unordered`\r\n - shows the number of examples (even for datasets prepared by generating arrow tables)\r\n- disabled multiprocessing by default - users must pass `num_proc` explicitly\r\n- tests\r\n- docs",
"Alright this is ready for review - sorry it ended up so big ^^'\r\n\r\nIf I can do anything to make it easier for your to review this PR @mariosasko let me know",
"Multiprocessing is disabled by default but we may show a warning to encourage users to pass `num_proc` if the dataset is split in many files. Let me know what you think",
"Hey, is this error seems to you guys natural? \r\n\r\nThe package built from `0d4e3907` commit tag, and here is the version displayed from the import ... \r\n```bash\r\n>>> datasets.__version__\r\n'2.6.1.dev0'\r\n>>> \r\n```\r\n\r\n```bash\r\n>>> data = load_dataset('dataset_loaders/rfw2latentplay', num_proc=14)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1719, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/load.py\", line 1523, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 1292, in __init__\r\n super().__init__(*args, **kwargs)\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 303, in __init__\r\n self.config, self.config_id = self._create_builder_config(\r\n File \"/somewhere//mambaforge/envs/datasets/lib/python3.8/site-packages/datasets/builder.py\", line 456, in _create_builder_config\r\n builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)\r\nTypeError: __init__() got an unexpected keyword argument 'num_proc'\r\n```\r\n\r\nLet me know if I can help fixing this ... \r\n",
"> Do we have some benchmarks to see the speed-up?\r\n\r\nOn my machine running `load_dataset(\"oscar-corpus/OSCAR-2201\", \"br\")` (which is split in shards) I go from 2-3k examples per sec to 4-5k examples per sec with num_proc=2 😉",
"> Hey, is this error seems to you guys natural?\r\n>\r\n> The package built from 0d4e3907 commit tag, and here is the version displayed from the import ...\r\n\r\nI don't know where you got the `0d4e3907` commit tag from, it doesn't seem to be in this PR. You should try installing from this PR, or wait for it to be merged on `main`",
"## Splits vs Shards\r\n\r\nMaybe it's a good idea to add some documentation on the `sharding` that can be achieved by passing `list` based arguments to the `SplitGenerator`s `gen_kwargs` ... \r\n\r\nI had to read the whole dataset generation source code to find this out ... \r\n\r\n\r\n",
"> Maybe it's a good idea to add some documentation on the sharding that can be achieved by passing list based arguments to the SplitGenerators gen_kwargs ...\r\n\r\nThis is part of this PR :) you can check the changes in docs/source/dataset_script.mdx",
"I took your comments into account @mariosasko thanks !\r\nLet me know if it's good for you now ;)",
"The doc CI should be fixed by now hopefully, merging !"
] |
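A usage sketch of the feature added by this PR, assuming a `datasets` version that includes it; the repository id is a placeholder for any dataset whose splits are generated from several files, so that the preparation work can actually be sharded across processes:

```python
from datasets import load_dataset, load_dataset_builder

# Multiprocessed preparation is opt-in: pass num_proc explicitly.
ds = load_dataset("username/some_sharded_dataset", num_proc=2)

# The same option is exposed on the builder directly.
builder = load_dataset_builder("username/some_sharded_dataset")
builder.download_and_prepare(num_proc=2)
```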
https://api.github.com/repos/huggingface/datasets/issues/3493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3493/comments | https://api.github.com/repos/huggingface/datasets/issues/3493/events | https://github.com/huggingface/datasets/pull/3493 | 1,089,967,286 | PR_kwDODunzps4wVxfr | 3,493 | Fix VCTK encoding | [] | closed | false | null | 0 | 2021-12-28T15:23:36Z | 2021-12-28T15:48:18Z | 2021-12-28T15:48:17Z | null | utf-8 encoding was missing in the VCTK dataset builder added in #3351 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3493/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3493/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3493.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3493",
"merged_at": "2021-12-28T15:48:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3493.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3493"
} | true | [] |
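The fix referenced here boils down to passing an explicit encoding when the builder reads the transcription files; a generic sketch with a placeholder path:

```python
# Relying on the platform default encoding can corrupt non-ASCII
# transcripts; always pass encoding="utf-8" explicitly.
with open("path/to/transcription.txt", encoding="utf-8") as f:
    transcription = f.read().strip()
```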
https://api.github.com/repos/huggingface/datasets/issues/1893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/events | https://github.com/huggingface/datasets/issues/1893 | 809,556,503 | MDU6SXNzdWU4MDk1NTY1MDM= | 1,893 | wmt19 is broken | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 2 | 2021-02-16T18:39:58Z | 2021-03-03T17:42:02Z | 2021-03-03T17:42:02Z | null | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent call last):
File "./run_seq2seq.py", line 661, in <module>
main()
File "./run_seq2seq.py", line 317, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
mapped = [
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
return function(data_struct)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | null | completed | null | null | false | [
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] |
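Instead of discovering the language pairs from the error message as in step 1 above, the configs can be listed programmatically; a small sketch assuming a recent `datasets` version that exposes `get_dataset_config_names`:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate the available language-pair configs, then pick one.
configs = get_dataset_config_names("wmt19")
print(configs)  # e.g. ['cs-en', 'de-en', ..., 'fr-de']

wmt_ru_en = load_dataset("wmt19", "ru-en")
```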
https://api.github.com/repos/huggingface/datasets/issues/5214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5214/comments | https://api.github.com/repos/huggingface/datasets/issues/5214/events | https://github.com/huggingface/datasets/pull/5214 | 1,440,334,978 | PR_kwDODunzps5CbmWE | 5,214 | Update github pr docs actions | [] | closed | false | null | 1 | 2022-11-08T14:43:37Z | 2022-11-08T15:39:58Z | 2022-11-08T15:39:57Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5214/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5214",
"merged_at": "2022-11-08T15:39:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5214"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/2980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2980/comments | https://api.github.com/repos/huggingface/datasets/issues/2980/events | https://github.com/huggingface/datasets/issues/2980 | 1,009,873,482 | I_kwDODunzps48MXJK | 2,980 | OpenSLR 25: ASR data for Amharic, Swahili and Wolof | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 3 | 2021-09-28T15:04:36Z | 2021-09-29T17:25:14Z | null | null | ## Adding a Dataset
- **Name:** *SLR25*
- **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr; subset 25 covers Amharic, Swahili and Wolof data.*
- **Paper:** *https://www.openslr.org/25/ has citations for each of the three subsets.*
- **Data:** *Currently the three links to the .tar.bz2 files can be found at https://www.openslr.org/25/*
- **Motivation:** *Increase ASR data for underrepresented African languages. Also, other subsets of OpenSLR speech recognition have been uploaded, so this would be easy.*
https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py has already been created for various other OpenSLR subsets, so this should be relatively straightforward to do.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2980/timeline | null | null | null | null | false | [
"Whoever handles this just needs to: \r\n\r\n- [ ] fork the HuggingFace Datasets repo\r\n- [ ] update the [existing dataset script](https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py) to add SLR25. Lots of copypasting from other sections of the script should make that easy. \r\nAmharic URL: https://www.openslr.org/resources/25/data_readspeech_am.tar.bz2. \r\nSwahili URL: https://www.openslr.org/resources/25/data_broadcastnews_sw.tar.bz2, \r\nWolof URL: https://www.openslr.org/resources/25/data_readspeech_wo.tar.bz2\r\n- [ ] update the [data card](https://github.com/huggingface/datasets/blob/master/datasets/openslr/README.md) to include information about SLR25. There's lots of other examples to draw from. \r\n- [ ] add the appropriate language tags to the data card as well. https://www.w3.org/International/questions/qa-choosing-language-tags, or just use `sw`, `am`, and `wo` for consistency. \r\n- [ ] make a pull request to merge your changes back into HuggingFace's repo",
"... also the example in \"use in datasets library\" should be updated. It currently says \r\n\r\nBut you actually have to specify a subset, e.g. \r\n```python\r\ndataset = load_dataset(\"openslr\", \"SLR32\")\r\n```",
"\r\n\r\n"
] |
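Once the checklist above is done and the subset is merged, loading it should mirror the existing subsets; a sketch that assumes the new config is named "SLR25", in line with the SLR32-style configs mentioned in the comments:

```python
from datasets import load_dataset

# "SLR25" is the assumed config name for the Amharic/Swahili/Wolof subset.
slr25 = load_dataset("openslr", "SLR25", split="train")
print(slr25[0])
```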
https://api.github.com/repos/huggingface/datasets/issues/1067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1067/comments | https://api.github.com/repos/huggingface/datasets/issues/1067/events | https://github.com/huggingface/datasets/pull/1067 | 756,414,212 | MDExOlB1bGxSZXF1ZXN0NTMxOTYyNDYx | 1,067 | add xquad-r dataset | [] | closed | false | null | 0 | 2020-12-03T17:50:01Z | 2020-12-03T17:53:21Z | 2020-12-03T17:53:15Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1067/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1067",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1067"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/4139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4139/comments | https://api.github.com/repos/huggingface/datasets/issues/4139/events | https://github.com/huggingface/datasets/issues/4139 | 1,199,443,822 | I_kwDODunzps5Hfg9u | 4,139 | Dataset viewer issue for Winoground | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
},
{
"color": "51F745",
"default": false,
"description": "",
"id": 4030248571,
"name": "dataset-viewer-gated",
"node_id": "LA_kwDODunzps7wOLZ7",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer-gated"
}
] | closed | false | null | 11 | 2022-04-11T06:11:41Z | 2022-06-21T16:43:58Z | 2022-06-21T16:43:58Z | null | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files from the interface, so I assume I have been granted access to it. I'd assume the permission somehow doesn't propagate to the dataset viewer tool.
Am I the one who added this dataset? No
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4139/timeline | null | completed | null | null | false | [
"related (same dataset): https://github.com/huggingface/datasets/issues/4149. But the issue is different. Looking at it",
"I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset viewer isn't passing through the identity of the user who has signed the licensing agreement when making the request to GET [examples.jsonl](https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl).",
"Pinging @SBrandeis, as it seems related to gated datasets and access tokens.",
"To replicate:\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token=\"hf_app_...\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 439, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py\", line 85, in _generate_tables\r\n for file_idx, file in enumerate(files):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 679, in __iter__\r\n yield from self.generator(*self.args, **self.kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 731, in _iter_from_urlpaths\r\n for dirpath, _, filenames in xwalk(urlpath, use_auth_token=use_auth_token):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 623, in xwalk\r\n for dirpath, dirnames, filenames in fs.walk(main_hop):\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/spec.py\", line 372, in walk\r\n listing = self.ls(path, detail=True, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 85, in wrapper\r\n return sync(self.loop, func, *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 65, in sync\r\n raise return_result\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/asyn.py\", line 25, in _runner\r\n result[0] = await coro\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 196, in _ls\r\n out = await self._ls_real(url, detail=detail, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 150, in _ls_real\r\n self._raise_not_found_for_status(r, url)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/fsspec/implementations/http.py\", line 208, in _raise_not_found_for_status\r\n response.raise_for_status()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/aiohttp/client_reqrep.py\", line 1004, in raise_for_status\r\n raise ClientResponseError(\r\naiohttp.client_exceptions.ClientResponseError: 401, message='Unauthorized', 
url=URL('https://huggingface.co/datasets/facebook/winoground/resolve/a86a60456fbbd242e9a744199071a6bd3e7fd9de/examples.jsonl')\r\n```\r\n\r\n*edited to fix `use_token` -> `use_auth_token`, thx @odellus*",
"~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~\r\nNevermind it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.",
"After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624\r\n\r\nThe streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating content of a zip archive\r\n\r\ncc @albertvillanova @lhoestq ",
"I was able to reproduce it on a private dataset, let me work on a fix",
"Hey @lhoestq, Thanks for working on a fix! Any plans to merge #4173 into master? ",
"Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;)",
"The fix has been merged, we'll do a new release soon, and update the dataset viewer",
"Fixed, thanks!\r\n<img width=\"1119\" alt=\"Capture d’écran 2022-06-21 à 18 41 09\" src=\"https://user-images.githubusercontent.com/1676121/174853571-afb0749c-4178-4c89-ab40-bb162a449788.png\">\r\n"
] |
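For gated datasets like this one, the user token has to be passed through so that both the script and the data files can be fetched once access has been granted; a sketch following the call shown in the thread, with "hf_..." as a placeholder token:

```python
from datasets import load_dataset

ds = load_dataset(
    "facebook/winoground",
    split="train",
    streaming=True,
    use_auth_token="hf_...",  # token of an account that accepted the terms
)
print(next(iter(ds)))
```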
https://api.github.com/repos/huggingface/datasets/issues/2713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2713/comments | https://api.github.com/repos/huggingface/datasets/issues/2713/events | https://github.com/huggingface/datasets/pull/2713 | 952,515,256 | MDExOlB1bGxSZXF1ZXN0Njk2Njk3MzU0 | 2,713 | Enumerate all ner_tags values in WNUT 17 dataset | [] | closed | false | null | 0 | 2021-07-26T05:22:16Z | 2021-07-26T09:30:55Z | 2021-07-26T09:30:55Z | null | This PR does:
- Enumerate all ner_tags in dataset card Data Fields section
- Add all metadata tags to dataset card
Close #2709. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2713/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2713",
"merged_at": "2021-07-26T09:30:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2713"
} | true | [] |
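The ner_tags values enumerated in the card can also be read straight from the dataset features; a small sketch:

```python
from datasets import load_dataset

wnut = load_dataset("wnut_17", split="train")
# ner_tags is a Sequence of ClassLabel, so the label names live on .feature
print(wnut.features["ner_tags"].feature.names)
# ['O', 'B-corporation', 'I-corporation', ...]
```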
https://api.github.com/repos/huggingface/datasets/issues/4502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4502/comments | https://api.github.com/repos/huggingface/datasets/issues/4502/events | https://github.com/huggingface/datasets/issues/4502 | 1,272,353,700 | I_kwDODunzps5L1pOk | 4,502 | Logic bug in arrow_writer? | [] | closed | false | null | 10 | 2022-06-15T14:50:00Z | 2022-06-18T15:15:51Z | 2022-06-18T15:15:51Z | null | https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488
I got an error, and I found that it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:
```
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
return
```
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4502/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4502/timeline | null | completed | null | null | false | [
"Hi @cccntu you're right, as when `batch_examples={}` the current if-statement won't be triggered as the condition won't be satisfied, I'll prepare a PR to address it as well as add the regression tests so that this issue is handled properly.",
"Hi @alvarobartt ,\r\nThanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.",
"> Hi @alvarobartt , Thanks for answering. Do you know when and why an empty batch is passed to this function? This only happened to me when processing with multiple workers, while chunking examples, I think.\r\n\r\nSo it depends on how you're actually chunking the data as if you're not handling empty chunks `batch_examples={}` or `batch_examples=None`, you may end up running into this issue. So you could check the chunks before you actually call `ArrowWriter.write_batch`, but anyway the fix you proposed I think improves the logic of `write_batch` to avoid running into these issues.",
"Thanks, I added a if-print and I found it does return an empty examples in the chunking function that is passed to `.map()`.",
"Hi ! We consider an empty batch to look like this:\r\n```python\r\nempty_batch = {\r\n \"column_1\": [],\r\n \"column_2\": [],\r\n ...\r\n}\r\n```\r\n\r\nWhile `{}` corresponds to a batch with no columns.\r\n\r\nTherefore calling this code should fail, because the two batches don't have the same columns:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({})\r\n```\r\n\r\nIf you want to write an empty batch, you should do this instead:\r\n```python\r\nwriter.write_batch({\"a\": [1, 2, 3]})\r\nwriter.write_batch({\"a\": []})\r\n```",
"Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using `if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...`?\r\n\r\nUpdating the regressions tests with an empty batch formatted as `{\"col_1\": [], \"col_2\": []}` instead of `{}` works fine with the current if, and also with the one proposed by @cccntu.",
"> Makes sense, then the if-statement should remain the same or is it better to handle both cases separately using if not batch_examples or len(next(iter(batch_examples.values()))) == 0: ...?\r\n\r\nThere's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for `{}` here\r\n\r\nIn particular the check `if not batch_examples or len(next(iter(batch_examples.values()))) == 0:` doesn't raise an error while it should, that why the old `if` is fine IMO\r\n\r\n> Updating the regressions tests with an empty batch formatted as {\"col_1\": [], \"col_2\": []} instead of {} works fine with the current if, and also with the one proposed by @cccntu.\r\n\r\nCool ! If you want you can update your PR to add the regression tests, to make sure that `{\"col_1\": [], \"col_2\": []}` works but not `{}`",
"Great thanks for the response! So I'll just add that regression test and remove the current if-statement.",
"Hi @lhoestq ,\r\n\r\nThanks for your explanation. Now I get it that `{}` means the columns are different. But wouldn't it be nice if the code can ignore it, like it ignores `{\"a\": []}`?\r\n\r\n\r\n--- \r\nBTW, \r\n> There's a check later in the code that makes sure that the columns are the right ones, so I don't think we need to check for {} here\r\n\r\nI remember the error happens around here:\r\nhttps://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L506-L507\r\nThe error says something like `arrays` and `schema` doesn't have the same length. And it's not very clear I passed a `{}`.\r\n\r\nedit: actual error message\r\n```\r\nFile \"site-packages/datasets/arrow_writer.py\", line 595, in write_batch\r\n pa_table = pa.Table.from_arrays(arrays, schema=schema)\r\n File \"pyarrow/table.pxi\", line 3557, in pyarrow.lib.Table.from_arrays\r\n File \"pyarrow/table.pxi\", line 1401, in pyarrow.lib._sanitize_arrays\r\nValueError: Schema and number of arrays unequal\r\n```",
"> But wouldn't it be nice if the code can ignore it, like it ignores {\"a\": []}?\r\n\r\nI think it would make things confusing because it doesn't follow our definition of a batch: \"the columns of a batch = the keys of the dict\". It would probably break certain behaviors as well. For example if you remove all the columns of a dataset (using `.remove_colums(...)` or `.map(..., remove_columns=...)`), the writer has to write 0 columns, and currently the only way to tell the writer to do so using `write_batch` is to pass `{}`.\r\n\r\n> The error says something like arrays and schema doesn't have the same length. And it's not very clear I passed a {}.\r\n\r\nYea the message can actually be improved indeed, it's definitely not clear. Maybe we can add a line right before the call `pa.Table.from_arrays` to make sure the keys of the batch match the field names of the schema"
] |
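A sketch of the batch convention discussed above, using the internal `ArrowWriter` API for illustration only (the constructor arguments are assumptions based on the library internals; only `write_batch` appears verbatim in the thread):

```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter  # internal API

features = Features({"a": Value("int64")})
writer = ArrowWriter(features=features, path="tmp.arrow")

writer.write_batch({"a": [1, 2, 3]})  # regular batch
writer.write_batch({"a": []})         # "empty batch": same column, zero rows
# writer.write_batch({})              # batch with no columns at all -> error

num_examples, num_bytes = writer.finalize()
```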
https://api.github.com/repos/huggingface/datasets/issues/3059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3059/comments | https://api.github.com/repos/huggingface/datasets/issues/3059/events | https://github.com/huggingface/datasets/pull/3059 | 1,022,620,057 | PR_kwDODunzps4tA54w | 3,059 | Fix task reloading from cache | [] | closed | false | null | 0 | 2021-10-11T12:03:04Z | 2021-10-11T12:23:39Z | 2021-10-11T12:23:39Z | null | When reloading a dataset from the cache when doing `map`, the tasks templates were kept instead of being updated regarding the output of the `map` function. This is an issue because we drop the tasks templates that are not compatible anymore after `map`, for example if a column of the template was removed.
This PR fixes this and for convenience introduces a decorator `@transmit_tasks` that takes care of doing this verification, similar to the `@transmit_format` decorator.
This should fix issue https://github.com/huggingface/datasets/issues/3047 cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3059/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3059.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3059",
"merged_at": "2021-10-11T12:23:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3059.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3059"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2384/comments | https://api.github.com/repos/huggingface/datasets/issues/2384/events | https://github.com/huggingface/datasets/pull/2384 | 896,866,461 | MDExOlB1bGxSZXF1ZXN0NjQ4OTI4NTQ0 | 2,384 | Add args description to DatasetInfo | [] | closed | false | null | 2 | 2021-05-20T13:53:10Z | 2021-05-22T09:26:16Z | 2021-05-22T09:26:14Z | null | Closes #2354
I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2384/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2384/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2384.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2384",
"merged_at": "2021-05-22T09:26:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2384.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2384"
} | true | [
"Thanks for the suggestions! I've included them and made a few minor tweaks along the way",
"Please merge master into this branch to fix the CI, I just fixed metadata validation tests."
] |
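The fields being documented in this PR can be inspected without downloading any data; a sketch using one dataset/config pair as an example:

```python
from datasets import load_dataset_builder

info = load_dataset_builder("glue", "mrpc").info
print(info.features)
print(info.download_size)   # size of the source files to download, in bytes
print(info.dataset_size)    # size of the generated Arrow data, in bytes
print(info.size_in_bytes)   # typically download_size + dataset_size
```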
https://api.github.com/repos/huggingface/datasets/issues/1491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1491/comments | https://api.github.com/repos/huggingface/datasets/issues/1491/events | https://github.com/huggingface/datasets/pull/1491 | 762,920,920 | MDExOlB1bGxSZXF1ZXN0NTM3NDIxMTc3 | 1,491 | added opus GNOME data | [] | closed | false | null | 1 | 2020-12-11T21:21:51Z | 2020-12-17T14:20:23Z | 2020-12-17T14:20:23Z | null | Dataset : http://opus.nlpl.eu/GNOME.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1491/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1491",
"merged_at": "2020-12-17T14:20:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1491"
} | true | [
"merging since the Ci is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4743/comments | https://api.github.com/repos/huggingface/datasets/issues/4743/events | https://github.com/huggingface/datasets/pull/4743 | 1,317,362,561 | PR_kwDODunzps48EUFs | 4,743 | Update map docs | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-07-25T20:59:35Z | 2022-07-27T16:22:04Z | 2022-07-27T16:10:04Z | null | This PR updates the `map` docs for processing text to include `return_tensors="np"` to make it run faster (see #4676). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4743/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4743/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4743",
"merged_at": "2022-07-27T16:10:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4743"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
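A sketch of the pattern the updated docs recommend, with NumPy-formatted tokenizer output; the model checkpoint, dataset and column name are only examples:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
imdb = load_dataset("imdb", split="train")

# return_tensors="np" makes the tokenizer hand back NumPy arrays, which
# datasets can write out faster than Python lists; padding keeps the
# per-batch arrays rectangular.
tokenized = imdb.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding=True, return_tensors="np"),
    batched=True,
)
```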
https://api.github.com/repos/huggingface/datasets/issues/5479 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5479/comments | https://api.github.com/repos/huggingface/datasets/issues/5479/events | https://github.com/huggingface/datasets/issues/5479 | 1,560,357,590 | I_kwDODunzps5dASrW | 5,479 | audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | [] | closed | false | null | 0 | 2023-01-27T20:01:22Z | 2023-01-29T05:23:14Z | 2023-01-29T05:23:14Z | null | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what could be missing or need to be updated on the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1.
from datasets import load_dataset
ds = load_dataset("audiofolder", data_dir="...")
Here is the output (should be generating 400+ rows):
Downloading and preparing dataset audiofolder/default to ...
Downloading data files: 0%| | 0/2 [00:00<?, ?it/s]
Downloading data files: 0it [00:00, ?it/s]
Extracting data files: 0it [00:00, ?it/s]
Generating train split: 0 examples [00:00, ? examples/s]
Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data.
0%| | 0/1 [00:00<?, ?it/s]
DatasetDict({
train: Dataset({
features: ['audio', 'transcription'],
num_rows: 1
})
})
Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env):
Package Version
------------------- -------------------
aiofiles 22.1.0
aiohttp 3.8.3
aiosignal 1.3.1
altair 4.2.1
anyio 3.6.2
appdirs 1.4.4
argcomplete 2.0.0
argon2-cffi 20.1.0
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
audioread 3.0.0
backcall 0.2.0
bleach 4.0.0
certifi 2021.10.8
cffi 1.14.6
charset-normalizer 2.0.12
click 8.1.3
contourpy 1.0.7
cycler 0.11.0
datasets 2.9.0
debugpy 1.4.1
decorator 5.0.9
defusedxml 0.7.1
dill 0.3.6
distlib 0.3.4
entrypoints 0.3
evaluate 0.4.0
expecttest 0.1.3
fastapi 0.89.1
ffmpy 0.3.0
filelock 3.6.0
fonttools 4.38.0
frozenlist 1.3.3
fsspec 2023.1.0
future 0.18.2
gradio 3.16.2
h11 0.14.0
httpcore 0.16.3
httpx 0.23.3
huggingface-hub 0.12.0
idna 3.3
ipykernel 6.2.0
ipython 7.26.0
ipython-genutils 0.2.0
ipywidgets 7.6.3
jedi 0.18.0
Jinja2 3.0.1
jiwer 2.5.1
joblib 1.2.0
jsonschema 3.2.0
jupyter 1.0.0
jupyter-client 6.1.12
jupyter-console 6.4.0
jupyter-core 4.7.1
jupyterlab-pygments 0.1.2
jupyterlab-widgets 1.0.0
kiwisolver 1.4.4
Levenshtein 0.20.2
librosa 0.9.2
linkify-it-py 1.0.3
llvmlite 0.39.1
markdown-it-py 2.1.0
MarkupSafe 2.0.1
matplotlib 3.6.3
matplotlib-inline 0.1.2
mdit-py-plugins 0.3.3
mdurl 0.1.2
mistune 0.8.4
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.5.4
nbconvert 6.1.0
nbformat 5.1.3
nest-asyncio 1.5.1
notebook 6.4.3
numba 0.56.4
numpy 1.20.3
orjson 3.8.5
packaging 21.0
pandas 1.5.3
pandocfilters 1.4.3
parso 0.8.2
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.4.0
pip 22.3.1
pipx 1.1.0
platformdirs 2.5.2
pooch 1.6.0
prometheus-client 0.11.0
prompt-toolkit 3.0.19
psutil 5.9.0
ptyprocess 0.7.0
pyarrow 10.0.1
pycparser 2.20
pycryptodome 3.16.0
pydantic 1.10.4
pydub 0.25.1
Pygments 2.10.0
pyparsing 2.4.7
pyrsistent 0.18.0
python-dateutil 2.8.2
python-multipart 0.0.5
pytz 2022.7.1
PyYAML 6.0
pyzmq 22.2.1
qtconsole 5.1.1
QtPy 1.10.0
rapidfuzz 2.13.7
regex 2022.10.31
requests 2.27.1
resampy 0.4.2
responses 0.18.0
rfc3986 1.5.0
scikit-learn 1.2.1
scipy 1.6.3
Send2Trash 1.8.0
setuptools 65.5.1
shiboken6 6.3.1
shiboken6-generator 6.3.1
six 1.16.0
sniffio 1.3.0
soundfile 0.11.0
starlette 0.22.0
terminado 0.11.0
testpath 0.5.0
threadpoolctl 3.1.0
tokenizers 0.13.2
toolz 0.12.0
torch 1.11.0a0+gitunknown
tornado 6.1
tqdm 4.64.1
traitlets 5.0.5
transformers 4.27.0.dev0
types-dataclasses 0.6.4
typing_extensions 4.1.1
uc-micro-py 1.0.1
urllib3 1.26.9
userpath 1.8.0
uvicorn 0.20.0
virtualenv 20.14.1
wcwidth 0.2.5
webencodings 0.5.1
websockets 10.4
wheel 0.37.1
widgetsnbextension 3.5.1
xxhash 3.2.0
yarl 1.8.2
### Steps to reproduce the bug
Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile are installed with the same versions listed above).
Create a custom audio dataset and load it in with load_dataset("audiofolder", ...)
### Expected behavior
load_dataset should create a dataset with 400+ rows.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.0
- PyArrow version: 10.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5479/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3823/comments | https://api.github.com/repos/huggingface/datasets/issues/3823/events | https://github.com/huggingface/datasets/issues/3823 | 1,159,497,844 | I_kwDODunzps5FHIh0 | 3,823 | 500 internal server error when trying to open a dataset composed of Zarr stores | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-03-04T10:37:14Z | 2022-03-08T09:47:39Z | 2022-03-08T09:47:39Z | null | ## Describe the bug
The dataset [openclimatefix/mrms](https://huggingface.co/datasets/openclimatefix/mrms) gives a 500 server error when trying to open it on the website, or through code.
The dataset doesn't have a loading script yet, and I pushed two [xarray](https://docs.xarray.dev/en/stable/) Zarr stores of data there fairly recently. The Zarr stores are composed of lots of small files, which I am guessing is the problem: we have another [OCF dataset](https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv) that uses xarray and Zarr, but with the Zarr stored on GCP public datasets instead of directly in HF datasets, and that one opens fine.
In general, we were hoping to use HF datasets to release more public geospatial datasets as benchmarks. These are commonly stored as Zarr stores, since they compress well and handle multi-dimensional data and coordinates fairly easily compared to other formats. But given this error, should we try a different format instead?
For context, we are trying to have complete public model+data reimplementations of some SOTA weather and solar nowcasting models, like [MetNet, MetNet-2,](https://github.com/openclimatefix/metnet) [DGMR](https://github.com/openclimatefix/skillful_nowcasting), and [others](https://github.com/openclimatefix/graph_weather), which all have large, complex datasets.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("openclimatefix/mrms")
```
## Expected results
The dataset should be downloaded or open up
## Actual results
A 500 internal server error
## Environment info
- `datasets` version: 1.18.3
- Platform: Linux-5.15.25-1-MANJARO-x86_64-with-glibc2.35
- Python version: 3.9.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3823/timeline | null | completed | null | null | false | [
"Hi @jacobbieker, thanks for reporting!\r\n\r\nI have transferred this issue to our Hub team and they are investigating it. I keep you informed. ",
"Hi @jacobbieker, we are investigating this issue on our side and we'll see if we can fix it, but please note that your repo is considered problematic for git. Here are the results of running https://github.com/github/git-sizer on it:\r\n\r\n```\r\nProcessing blobs: 147448 \r\nProcessing trees: 27 \r\nProcessing commits: 4 \r\nMatching commits to trees: 4 \r\nProcessing annotated tags: 0 \r\nProcessing references: 3 \r\n| Name | Value | Level of concern |\r\n| ---------------------------- | --------- | ------------------------------ |\r\n| Biggest objects | | |\r\n| * Trees | | |\r\n| * Maximum entries [1] | 167 k | !!!!!!!!!!!!!!!!!!!!!!!!!!!!!! |\r\n| | | |\r\n| Biggest checkouts | | |\r\n| * Number of files [2] | 189 k | *** |\r\n\r\n[1] aa057d2667c34c70c6146efc631f5c9917ff326e (refs/heads/main:2016.zarr/unknown)\r\n[2] 6897b7bf6440fdd16b2c39d08085a669e7eaa59d (refs/heads/main^{tree})\r\n```\r\n\r\nYou can check https://github.com/github/git-sizer for more information on how to avoid such pathological structures.",
"Hi, thanks for getting back to me so quick! And yeah, I figured that was probably the problem. I was going to try to delete the repo, but couldn't through the website, so if that's the easiest way to solve it, I can regenerate the dataset in a different format with less tiny files, and you guys can delete the repo as it is. Zarr just saves everything as lots of small files to make chunks easy to load, which is why I was preferring that format, but maybne that just doesn't work well for HF datasets.",
"Hi @jacobbieker,\r\n\r\nFor future use cases, our Hub team is still pondering whether to limit the maximum number of files per repo to avoid technical issues...\r\n\r\nOn the meantime, they have made a fix and your dataset is working: https://huggingface.co/datasets/openclimatefix/mrms"
] |
https://api.github.com/repos/huggingface/datasets/issues/4333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4333/comments | https://api.github.com/repos/huggingface/datasets/issues/4333/events | https://github.com/huggingface/datasets/pull/4333 | 1,234,038,705 | PR_kwDODunzps43uSuj | 4,333 | Adding eval metadata for Banking 77 | [] | closed | false | null | 1 | 2022-05-12T14:05:05Z | 2022-05-12T21:03:32Z | 2022-05-12T21:03:31Z | null | Adding eval metadata for Banking 77 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4333/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4333",
"merged_at": "2022-05-12T21:03:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4333"
} | true | [
"@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/events | https://github.com/huggingface/datasets/issues/1911 | 812,009,956 | MDU6SXNzdWU4MTIwMDk5NTY= | 1,911 | Saving processed dataset running infinitely | [] | open | false | null | 6 | 2021-02-19T13:09:19Z | 2021-02-23T07:34:44Z | null | null | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 3 hrs. I used map() with batch size 1024 and multiprocessing with 96 processes.
The filter() function was way too slow, so I used a hack based on the pyarrow table filter function, which is damn fast. Mentioned [here](https://github.com/huggingface/datasets/issues/1796)
```dataset._data = dataset._data.filter(...)```
It took 1 hr for the filter.
Then I used `save_to_disk()` on the processed dataset and it has been running forever.
I have been waiting for 8 hrs and it has not written a single byte.
In fact, it has actually read more than 100 GB from disk; the screenshot below shows the stats using `iotop`.
The second process is the one.
<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">
I am not able to figure out whether this is some issue with the datasets library or whether it is due to my hack for the filter() function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | null | null | null | false | [
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Saves a dataset to a dataset directory, or in a filesystem using either :class:`datasets.filesystem.S3FileSystem` or any implementation of ``fsspec.spec.AbstractFileSystem``.\r\n\r\n Args:\r\n dataset_path (``str``): path (e.g. ``dataset/train``) or remote uri (e.g. ``s3://my-bucket/dataset/train``) of the dataset directory where the dataset will be saved to\r\n fs (Optional[:class:`datasets.filesystem.S3FileSystem`,``fsspec.spec.AbstractFileSystem``], `optional`, defaults ``None``): instance of :class:`datasets.filesystem.S3FileSystem` or ``fsspec.spec.AbstractFileSystem`` used to download the files from remote filesystem.\r\n \"\"\"\r\n assert (\r\n not self.list_indexes()\r\n ), \"please remove all the indexes using `dataset.drop_index` before saving a dataset\"\r\n self = pickle.loads(pickle.dumps(self))\r\n ```",
"It's been 24 hours and sadly it's still running. With not a single byte written",
"Tried finding the root cause but was unsuccessful.\r\nI am using lazy tokenization with `dataset.set_transform()`, it works like a charm with almost same performance as pre-compute.",
"Hi ! This very probably comes from the hack you used.\r\n\r\nThe pickling line was added an a sanity check because save_to_disk uses the same assumptions as pickling for a dataset object. The main assumption is that memory mapped pyarrow tables must be reloadable from the disk. In your case it's not possible since you altered the pyarrow table.\r\nI would suggest you to rebuild a valid Dataset object from your new pyarrow table. To do so you must first save your new table to a file, and then make a new Dataset object from that arrow file.\r\n\r\nYou can save the raw arrow table (without all the `datasets.Datasets` metadata) by calling `map` with `cache_file_name=\"path/to/outut.arrow\"` and `function=None`. Having `function=None` makes the `map` write your dataset on disk with no data transformation.\r\n\r\nOnce you have your new arrow file, load it with `datasets.Dataset.from_file` to have a brand new Dataset object :)\r\n\r\nIn the future we'll have a better support for the fast filtering method from pyarrow so you don't have to do this very unpractical workaround. Since it breaks somes assumptions regarding the core behavior of Dataset objects, this is very discouraged.",
"Thanks, @lhoestq for your response. Will try your solution and let you know."
] |
https://api.github.com/repos/huggingface/datasets/issues/5157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5157/comments | https://api.github.com/repos/huggingface/datasets/issues/5157/events | https://github.com/huggingface/datasets/issues/5157 | 1,421,703,577 | I_kwDODunzps5UvXmZ | 5,157 | Consistent caching between python and jupyter | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2022-10-25T01:34:33Z | 2022-11-02T15:43:22Z | 2022-11-02T15:43:22Z | null | ### Feature request
I hope this is not my mistake. Currently, if I use `load_dataset` from a Python session on a custom dataset to do the preprocessing, the result is saved in the cache and other Python sessions load it from the cache. However, calling the same code from a Jupyter notebook does not work, meaning the preprocessing starts from scratch.
If adjusting the hashes is impossible, is there a way to manually set the dataset fingerprint to "force" this behaviour?
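For reference, one partial workaround (just a sketch — `preprocess` and `data.jsonl` are stand-ins for the real function and data) is to pin the cache file and fingerprint explicitly in `map`, so that a plain Python session and a Jupyter session resolve to the same cache entry:
```python
from datasets import load_dataset

def preprocess(example):
    # stand-in for the real preprocessing
    return {"text": example["text"].lower()}

ds = load_dataset("json", data_files="data.jsonl", split="train")
ds = ds.map(
    preprocess,
    cache_file_name="./preprocessed_cache.arrow",
    new_fingerprint="preprocessed-v1",
)
```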
### Motivation
If this is not already the case and I am doing something wrong, it would be useful to have the two fingerprints consistent so one can create the dataset once and then try small things on jupyter without preprocessing everything again.
### Your contribution
I am happy to try a PR if you give me some pointers where the changes should happen | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5157/timeline | null | completed | null | null | false | [
"Hi ! Maybe it's possible to have a consistent hash for a function defined in `__main__` and a function define in a notebook.\r\n\r\nHowever for functions imported from another location, pickle uses the location to identify the code, so in that case we can't do much I believe.\r\n\r\nWould it be ok for you if we only try to do this for functions in `__main__` / jupyter ?\r\n\r\nIf you'd like to contribute, you can read this part of the code and let me know if you have questions:\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/utils/py_utils.py#L617-L643\r\n\r\nI think the key here would be to also ignore the \"co_filename\" of functions defined in `__main__`",
"Seems like a good solution, I will start a PR and see if I understood the changes needed. Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2382/comments | https://api.github.com/repos/huggingface/datasets/issues/2382/events | https://github.com/huggingface/datasets/issues/2382 | 895,610,216 | MDU6SXNzdWU4OTU2MTAyMTY= | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | [] | closed | false | null | 0 | 2021-05-19T15:49:48Z | 2021-05-30T13:26:16Z | 2021-05-30T13:26:16Z | null | Hello everyone,
I am trying to use the head_qa dataset from https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I run the `load_dataset(...)` call above, it throws the following:
```
DuplicatedKeysError Traceback (most recent call last)
<ipython-input-6-ea87002d32f0> in <module>()
2 from datasets import load_dataset
3 dataset = load_dataset(
----> 4 'head_qa', 'en')
5 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
347 for hash, key in self.hkey_record:
348 if hash in tmp_record:
--> 349 raise DuplicatedKeysError(key)
350 else:
351 tmp_record.add(hash)
DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 1
Keys should be unique and deterministic in nature
```
How can I fix the error? Thanks
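For reference, this error means the dataset's loading script yields the same key (here `1`) for more than one example, so the fix belongs in the script itself. The general shape of such a fix (illustrative, not the actual head_qa code) is to derive a unique, deterministic key, e.g. with `enumerate`:
```python
import json

def _generate_examples(self, filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            example = json.loads(line)
            yield idx, example  # enumerate() guarantees unique, deterministic keys
```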
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2382/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3611/comments | https://api.github.com/repos/huggingface/datasets/issues/3611/events | https://github.com/huggingface/datasets/issues/3611 | 1,110,399,096 | I_kwDODunzps5CL1h4 | 3,611 | Indexing bug after dataset.select() | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-01-21T12:09:30Z | 2022-01-27T18:16:22Z | 2022-01-27T18:16:22Z | null | ## Describe the bug
Dataset indexing is not working as expected after `dataset.select(range(100))`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import datasets
task_to_keys = {
"cola": ("sentence", None),
"mnli": ("premise", "hypothesis"),
"mrpc": ("sentence1", "sentence2"),
"qnli": ("question", "sentence"),
"qqp": ("question1", "question2"),
"rte": ("sentence1", "sentence2"),
"sst2": ("sentence", None),
"stsb": ("sentence1", "sentence2"),
"wnli": ("sentence1", "sentence2"),
}
task_name = "sst2"
raw_datasets = datasets.load_dataset("glue", task_name)
train_dataset = raw_datasets["train"]
print("before select: ",train_dataset[-2:])
# before select: {'sentence': ['a patient viewer ', 'this new jangle of noise , mayhem and stupidity must be a serious contender for the title . '], 'label': [1, 0], 'idx': [67347, 67348]}
train_dataset = train_dataset.select(range(100))
print("after select: ",train_dataset[-2:])
# after select: {'sentence': [], 'label': [], 'idx': []}
```
link to colab: https://colab.research.google.com/drive/1LngeRC9f0jE7eSQ4Kh1cIeb411lRXQD-?usp=sharing
## Expected results
The examples at indices 98 and 99 of the selected subset should be shown (i.e. `train_dataset[-2:]` should return two rows).
## Actual results
Empty lists are returned for all columns.
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3611/timeline | null | completed | null | null | false | [
"Hi! Thanks for reporting! I've opened a PR with the fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/3194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3194/comments | https://api.github.com/repos/huggingface/datasets/issues/3194/events | https://github.com/huggingface/datasets/pull/3194 | 1,041,999,535 | PR_kwDODunzps4t91Eg | 3,194 | Update link to Datasets Tagging app in Spaces | [] | closed | false | null | 0 | 2021-11-02T08:13:50Z | 2021-11-08T10:36:23Z | 2021-11-08T10:36:22Z | null | Fix #3193. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3194",
"merged_at": "2021-11-08T10:36:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3194"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/549/comments | https://api.github.com/repos/huggingface/datasets/issues/549/events | https://github.com/huggingface/datasets/pull/549 | 689,766,465 | MDExOlB1bGxSZXF1ZXN0NDc2Nzc0OTI1 | 549 | Fix bleurt logging import | [] | closed | false | null | 2 | 2020-09-01T03:01:25Z | 2020-09-03T18:04:46Z | 2020-09-03T09:04:20Z | null | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes?
Thanks (and also for your continued work on the lib...) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/549/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/549",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/549"
} | true | [
"That’s a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLet’s update this in the coming release.",
"Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)."
] |
https://api.github.com/repos/huggingface/datasets/issues/4697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4697/comments | https://api.github.com/repos/huggingface/datasets/issues/4697/events | https://github.com/huggingface/datasets/issues/4697 | 1,307,332,253 | I_kwDODunzps5N7E6d | 4,697 | Trouble with streaming frgfm/imagenette vision dataset with TAR archive | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 5 | 2022-07-18T02:51:09Z | 2022-08-01T15:10:57Z | 2022-08-01T15:10:57Z | null | ### Link
https://huggingface.co/datasets/frgfm/imagenette
### Description
Hello there :wave:
Thanks for the amazing work you've done with HF Datasets! I've just started playing with it, and managed to upload my first dataset. But for the second one, I'm having trouble with the preview since there is some archive extraction involved :sweat_smile:
Basically, I get a:
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.
```
I've tried several things and checked this issue https://github.com/huggingface/datasets/issues/4181 as well, but no luck so far!
Could you point me in the right direction please? :pray:
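For reference, a rough sketch of the `dl_manager.iter_archive` pattern the error message points to (the names, `_URL`, and label logic here are illustrative, not the actual imagenette script):
```python
def _split_generators(self, dl_manager):
    # download only, no extraction, so the TAR can be iterated in streaming mode
    archive = dl_manager.download(_URL)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"files": dl_manager.iter_archive(archive)},
        )
    ]

def _generate_examples(self, files):
    for path, f in files:
        if path.endswith((".JPEG", ".jpeg", ".jpg")):
            label = path.split("/")[-2]  # assumes class-named parent folders
            yield path, {"image": {"path": path, "bytes": f.read()}, "label": label}
```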
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4697/timeline | null | completed | null | null | false | [
"Hi @frgfm, thanks for reporting.\r\n\r\nAs the error message says, streaming mode is not supported out of the box when the dataset contains TAR archive files.\r\n\r\nTo make the dataset streamable, you have to use `dl_manager.iter_archive`.\r\n\r\nThere are several examples in other datasets, e.g. food101: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nAnd yes, as the link you pointed out, for the streaming to be possible, the metadata file must be loaded before all of the images:\r\n- either this is the case when iterating the archive (and you get the metadata file before the images)\r\n- or you have to extract the metadata file by hand and upload it separately to the Hub",
"Hi @albertvillanova :wave:\r\n\r\nThanks! Yeah I saw that but since I didn't have any metadata, I wasn't sure whether I should create them myself.\r\n\r\nSo one last question:\r\nWhat is the metadata supposed to be for archives? The relative path of all files in it?\r\n_(Sorry I'm a bit confused since it's quite hard to debug using the single error message from the data preview :sweat_smile: )_",
"Hi @frgfm, streaming a dataset that contains a TAR file requires some tweaks because (contrary to ZIP files), tha TAR archive does not allow random access to any of the contained member files. Instead they have to be accessed sequentially (in the order in which they were put into the TAR file when created) and yielded.\r\n\r\nSo when iterating over the TAR file content, when an image file is found, we need to yield it (and not keeping it in memory, which will require huge RAM memory for large datasets). But when yielding an image file, we also need to yield with it what we call \"metadata\": the class label, and other textual information (for example, for audio files, sometimes we also add info such as the speaker ID, their sex, their age,...).\r\n\r\nAll this information usually is stored in what we call the metadata file: either a JSON or a CSV/TSV file.\r\n\r\nBut if this is also inside the TAR archive, we need to find this file in the first place when iterating the TAR archive, so that we already have this information when we find an image file and we can yield the image file and its metadata info.\r\n\r\nTherefore:\r\n- either the TAR archive contains the metadata file as the first member when iterating it (something we cannot change as it is done at the creation of the TAR file)\r\n- or if not, then we need to have the metadata file elsewhere\r\n - in these cases, what we do (if the dataset license allows it) is:\r\n - we download the TAR file locally, we extract the metadata file and we host the metadata on the Hub\r\n - we modify the dataset loading script so that it first downloads the metadata file (and reads it) and only then starts iterating the content of the TAR archive file\r\n\r\nSee an example of this process we recently did for \"google/fleurs\" (their metadata files for \"train\" were at the end of the TAR archives, after all audio files): https://huggingface.co/datasets/google/fleurs/discussions/4\r\n- we uploaded the metadata file to the Hub\r\n- we adapted the loading script to use it",
"Hi @albertvillanova :wave: \r\n\r\nThanks, since my last message, I went through the repo of https://huggingface.co/datasets/food101/blob/main/food101.py and managed to get it to work in the end :pray: \r\n\r\nHere it is: https://huggingface.co/datasets/frgfm/imagenette\r\n\r\nI appreciate you opening an issue to document the process, it might help a few!",
"Great to see that you manage to make your dataset streamable. :rocket: \r\n\r\nI'm closing this issue, as for the docs update there is another issue opened:\r\n- #4711"
] |
https://api.github.com/repos/huggingface/datasets/issues/319 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/319/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/319/comments | https://api.github.com/repos/huggingface/datasets/issues/319/events | https://github.com/huggingface/datasets/issues/319 | 646,792,487 | MDU6SXNzdWU2NDY3OTI0ODc= | 319 | Nested sequences with dicts | [] | closed | false | null | 1 | 2020-06-27T23:45:17Z | 2020-07-03T10:22:00Z | 2020-07-03T10:22:00Z | null | Am pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but am getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`.
The original data is in this format:
```python
{
'title': "Title of wiki page",
'vertexSet': [
[
{ 'name': "mention_name",
'sent_id': "mention in which sentence",
'pos': ["postion of mention in a sentence"],
'type': "NER_type"},
{another mention}
],
[another entity]
]
...
}
```
So to represent this I've attempted to write:
```
...
features=nlp.Features({
"title": nlp.Value("string"),
"vertexSet": nlp.features.Sequence(nlp.features.Sequence({
"name": nlp.Value("string"),
"sent_id": nlp.Value("int32"),
"pos": nlp.features.Sequence(nlp.Value("int32")),
"type": nlp.Value("string"),
})),
...
}),
...
```
This is giving me the error:
```
pyarrow.lib.ArrowTypeError: Could not convert [{'pos': [[0,2], [2,4], [3,5]], "type": ["ORG", "ORG", "ORG"], "name": ["Lark Force", "Lark Force", "Lark Force", "sent_id": [0, 3, 4]}..... with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
Do we expect the pyarrow stuff to break when doing this deeper nesting? I've checked that it still works when you do `nlp.features.Sequence(nlp.features.Sequence(nlp.Value("string")))` or `nlp.features.Sequence({key:value,...})`, just not nested sequences with a dict.
If it's not possible, I can always convert it to a shallower structure. I'd rather not change the DocRED authors' structure if I don't have to though. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/319/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/319/timeline | null | completed | null | null | false | [
"Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define the list in the feature with a simple list or a tuple (which is also simpler to write).\r\nIn your case, the features could be as follow:\r\n``` python\r\n...\r\nfeatures=nlp.Features({\r\n \"title\": nlp.Value(\"string\"),\r\n \"vertexSet\": [[{\r\n \"name\": nlp.Value(\"string\"),\r\n \"sent_id\": nlp.Value(\"int32\"),\r\n \"pos\": nlp.features.Sequence(nlp.Value(\"int32\")),\r\n \"type\": nlp.Value(\"string\"),\r\n }]],\r\n ...\r\n }),\r\n...\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/3093 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3093/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3093/comments | https://api.github.com/repos/huggingface/datasets/issues/3093/events | https://github.com/huggingface/datasets/issues/3093 | 1,027,262,124 | I_kwDODunzps49Osas | 3,093 | Error loading json dataset with multiple splits if keys in nested dicts have a different order | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-15T09:33:25Z | 2022-04-10T14:06:29Z | 2022-04-10T14:06:29Z | null | ## Describe the bug
Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below.
If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fine.
## Steps to reproduce the bug
Create two json files:
train.json
```
{"a": {"c": 8, "b": 5}}
{"a": {"b": 7, "c": 6}}
```
test.json
```
{"a": {"b": 1, "c": 2}}
{"a": {"b": 3, "c": 4}}
```
```python
from datasets import load_dataset
# Loading the files individually works (even though the keys in train.json don't have the same order)
load_dataset('json', data_files={"test": "test.json"})
load_dataset('json', data_files={"train": "train.json"})
# Loading both splits fails
load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
```
## Expected results
Loading both splits should not give an error whether the nested dicts are have the same order or not.
## Actual results
```
>>> load_dataset('json', data_files={"train": "train.json", "test": "test.json"})
Using custom data configuration default-f1bc76fd07398c4c
Downloading and preparing dataset json/default to /home/dthulke/.cache/huggingface/datasets/json/default-f1bc76fd07398c4c/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8839.42it/s]
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 477.82it/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/load.py", line 1632, in load_dataset
use_auth_token=use_auth_token,
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 608, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 697, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 1159, in _prepare_split
writer.write_table(table)
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays
File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray
File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast
File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/pyarrow/compute.py", line 297, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct
```
## Environment info
- `datasets` version: 1.13.2
- Platform: Linux-4.15.0-147-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3093/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3093/timeline | null | completed | null | null | false | [
"Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf = pd.read_json(buffer, lines=True)\r\n\r\nprint(df.shape[0]) # 0\r\n```\r\n\r\nSo we can't even fall back to Pandas in such cases.\r\n\r\nIt seems the only option is a script that recursively re-orders fields to enforce deterministic order:\r\n```python\r\nwith open(\"train.json\", \"r\") as fin:\r\n with open(\"train_reordered.json\", \"w\") as fout:\r\n for line in fin:\r\n obj_jsonl = json.loads(line.strip())\r\n fout.write(json.dumps(obj_jsonl, sort_keys=True) + \"\\n\")\r\n```",
"Fixed in #3575, so I'm closing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/4115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4115/comments | https://api.github.com/repos/huggingface/datasets/issues/4115/events | https://github.com/huggingface/datasets/issues/4115 | 1,194,907,555 | I_kwDODunzps5HONej | 4,115 | ImageFolder add option to ignore some folders like '.ipynb_checkpoints' | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 5 | 2022-04-06T17:29:43Z | 2022-06-01T13:04:16Z | 2022-06-01T13:04:16Z | null | **Is your feature request related to a problem? Please describe.**
I sometimes like to peek at the dataset images from JupyterLab. As a result, an '.ipynb_checkpoints' folder appears where my dataset is and (I just realized) leads to accidental duplicate image additions. I think this is an easy enough thing to miss, especially if the dataset is very large.
**Describe the solution you'd like**
Maybe have an `ignore` option or something .gitignore-style, e.g.:
`dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")`
**Describe alternatives you've considered**
Could filter out manually
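For example, a minimal sketch of that manual route (paths assumed), pruning checkpoint folders before loading:
```python
import shutil
from pathlib import Path

from datasets import load_dataset

root = Path("./data/original")
# drop Jupyter's auto-created checkpoint folders so they aren't picked up
for ckpt_dir in root.rglob(".ipynb_checkpoints"):
    shutil.rmtree(ckpt_dir)

dataset = load_dataset("imagefolder", data_dir=str(root))
```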
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4115/timeline | null | completed | null | null | false | [
"Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ",
"Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ",
"I think they should always ignore them actually ! Not sure if adding a flag would be helpful",
"@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?",
"> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them."
] |
https://api.github.com/repos/huggingface/datasets/issues/4043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4043/comments | https://api.github.com/repos/huggingface/datasets/issues/4043/events | https://github.com/huggingface/datasets/pull/4043 | 1,183,624,475 | PR_kwDODunzps41Kl0b | 4,043 | Create metric card for CUAD | [] | closed | false | null | 1 | 2022-03-28T15:38:58Z | 2022-03-29T15:20:56Z | 2022-03-29T15:15:19Z | null | Proposing a CUAD metric card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4043/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4043",
"merged_at": "2022-03-29T15:15:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4043"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/770/comments | https://api.github.com/repos/huggingface/datasets/issues/770/events | https://github.com/huggingface/datasets/pull/770 | 731,445,222 | MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1 | 770 | Fix custom builder caching | [] | closed | false | null | 0 | 2020-10-28T13:32:24Z | 2020-10-29T09:36:03Z | 2020-10-29T09:36:01Z | null | The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
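A rough sketch of the idea (not the actual implementation in this PR) — derive a deterministic suffix from the extra parameters so different configurations get different cache directories:
```python
import hashlib
import json

def config_suffix(features=None, **config_kwargs) -> str:
    payload = {"features": str(features), **{k: str(v) for k, v in sorted(config_kwargs.items())}}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()[:16]

# e.g. the `text` dataset loaded with a non-default encoding
cache_dir_name = f"text-{config_suffix(encoding='latin-1')}"
```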
Fix #730
Fix #750 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/770/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/770",
"merged_at": "2020-10-29T09:36:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/770"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4291/comments | https://api.github.com/repos/huggingface/datasets/issues/4291/events | https://github.com/huggingface/datasets/issues/4291 | 1,227,777,500 | I_kwDODunzps5JLmXc | 4,291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-05-06T12:03:27Z | 2022-05-09T08:25:58Z | 2022-05-09T08:25:58Z | null | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4291/timeline | null | completed | null | null | false | [
"Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.",
"Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6060/comments | https://api.github.com/repos/huggingface/datasets/issues/6060/events | https://github.com/huggingface/datasets/issues/6060 | 1,816,614,120 | I_kwDODunzps5sR1To | 6,060 | Dataset.map() execute twice when in PyTorch DDP mode | [] | open | false | null | 3 | 2023-07-22T05:06:43Z | 2023-07-24T19:29:55Z | null | null | ### Describe the bug
I use `torchrun --standalone --nproc_per_node=2 train.py` to start training, and I wrote the code following the [docs](https://huggingface.co/docs/datasets/process#distributed-usage). The trick of using `torch.distributed.barrier()` so that only the main process executes `map` doesn't always work: when I am training the model, the dataset gets mapped twice, but when I run a test of just the dataset and dataloader (only printing the batches), it works as expected. The dataset-loading code is the same in both cases.
It also fails on another server with 30 CPU cores where I use 2 GPUs.
I have tried checking `rank` and `local_rank`, but that didn't explain anything.
### Steps to reproduce the bug
use `torchrun --standalone --nproc_per_node=2 train.py` or `torchrun --standalone train.py` to run
This is my code:
```python
if args.distributed and world_size > 1:
    if args.local_rank > 0:
        print(f"Rank {args.rank}: Gpu {args.gpu} waiting for main process to perform the mapping", force=True)
        torch.distributed.barrier()
    print("Mapping dataset")
    dataset = dataset.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True), num_proc=8, desc="cut_reorder_keys")
    dataset = dataset.map(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16), num_proc=8, desc="random_shift")
    dataset_test = dataset_test.map(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=False), num_proc=8, desc="cut_reorder_keys")
    if args.local_rank == 0:
        print("Mapping finished, loading results from main process")
        torch.distributed.barrier()
```
### Expected behavior
Only the main process should execute `map`, while the other processes load the result from the cache on disk.
### Environment info
server with 64 CPU cores (AMD Ryzen Threadripper PRO 5995WX 64-Cores) and 2 RTX 4090
- `python==3.9.16`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `22.04.1-Ubuntu`
server with 30 CPU cores (Intel(R) Xeon(R) Platinum 8375C CPU @ 2.90GHz) and 2 RTX 4090
- `python==3.9.0`
- `datasets==2.13.1`
- `torch==2.0.1+cu117`
- `Ubuntu 20.04` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6060/timeline | null | null | null | null | false | [
"Sorry for asking a duplicate question about `num_proc`, I searched the forum and find the solution.\r\n\r\nBut I still can't make the trick with `torch.distributed.barrier()` to only map at the main process work. The [post on forum]( https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or-fairscale/7229/7) didn't help.",
"If it does the `map` twice then it means the hash of your map function is not some same between your two processes.\r\n\r\nCan you make sure your map functions have the same hash in different processes ?\r\n\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nprint(Hasher.hash(lambda x: cut_reorder_keys(x, num_stations_list=args.num_stations_list, is_pad=True, is_train=True)))\r\nprint(Hasher.hash(lambda x: random_shift(x, shift_range=(-160, 0), feature_scale=16)))\r\n```\r\n\r\nYou can also set the fingerprint used to reload the resulting dataset by passing `new_finegrprint=` in `map`, see https://huggingface.co/docs/datasets/v2.13.1/en/about_cache#the-cache. This will force the different processes to use the same fingerprint used to locate the resulting dataset in the cache.",
"Thanks for help! I find the fingerprint between processes don't have same hash:\r\n```\r\nRank 0: Gpu 0 cut_reorder_keys fingerprint c7f47f40e9a67657\r\nRank 0: Gpu 0 random_shift fingerprint 240a0ce79831e7d4\r\n\r\nRank 1: Gpu 1 cut_reorder_keys fingerprint 20edd3d9cf284001\r\nRank 1: Gpu 1 random_shift fingerprint 819f7c1c18e7733f\r\n```\r\nBut my functions only process the example one by one and don't need rank or other arguments. After all it can work in the test for dataset and dataloader.\r\nI'll try to set `new_fingerprint` to see if it works and figure out the reason of different hash."
] |
https://api.github.com/repos/huggingface/datasets/issues/3872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3872/comments | https://api.github.com/repos/huggingface/datasets/issues/3872/events | https://github.com/huggingface/datasets/issues/3872 | 1,163,853,026 | I_kwDODunzps5FXvzi | 3,872 | HTTP error 504 Server Error: Gateway Time-out | [] | closed | false | null | 6 | 2022-03-09T12:03:37Z | 2022-03-15T16:19:50Z | 2022-03-15T16:19:50Z | null | I am trying to push a large dataset(450000+) records with the help of `push_to_hub()`
While pushing, it gives some error like this.
```
Traceback (most recent call last):
File "data_split_speech.py", line 159, in <module>
data_new_2.push_to_hub("user-name/dataset-name",private=True)
File "/opt/conda/lib/python3.8/site-packages/datasets/dataset_dict.py", line 951, in push_to_hub
repo_id, split, uploaded_size, dataset_nbytes = self[split]._push_parquet_shards_to_hub(
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3556, in _push_parquet_shards_to_hub
api.upload_file(
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1017, in upload_file
raise err
File "/opt/conda/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 1008, in upload_file
r.raise_for_status()
File "/opt/conda/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/user-name/dataset-name/upload/main/data/train2-00041-of-00064.parquet
```
Can anyone help me to resolve this issue.
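Until a built-in retry exists, one crude client-side workaround (just a sketch, reusing the `data_new_2` object from the traceback above) is to retry the push with exponential backoff, since 504s are often transient:
```python
import time

for attempt in range(5):
    try:
        data_new_2.push_to_hub("user-name/dataset-name", private=True)
        break
    except Exception as err:  # e.g. requests.exceptions.HTTPError: 504
        wait = 2 ** attempt
        print(f"push failed ({err}), retrying in {wait}s")
        time.sleep(wait)
```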
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3872/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3872/timeline | null | completed | null | null | false | [
"is pushing directly with git (and git-lfs) an option for you?",
"I have installed git-lfs and doing this push with that\r\n",
"yes but is there any way you could try pushing with `git` command line directly instead of `push_to_hub`?",
"Okay. I didnt saved the dataset to my local machine. So, I processed the dataset and pushed it directly to the hub. I think I should try saving those dataset to my local machine by `save_to_disk` and then push it with git command line",
"cc @lhoestq @albertvillanova @LysandreJik because maybe I'm giving dumb advice here 😅 ",
"`push_to_hub` is the preferred way of uploading a dataset to the Hub, which can then be reloaded with `load_dataset`. Feel free to try again and see if the server is working as expected now. Maybe we can add a retry mechanism in the meantime to workaround 504 errors.\r\n\r\nRegarding `save_to_disk`, this must only be used for local serialization (because it's uncompressed and compatible with memory-mapping). If you upload a dataset saved with `save_to_disk` to the Hub, then to reload it you will have to download/clone the repository locally by yourself and use `load_from_disk`."
] |
https://api.github.com/repos/huggingface/datasets/issues/4086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4086/comments | https://api.github.com/repos/huggingface/datasets/issues/4086/events | https://github.com/huggingface/datasets/issues/4086 | 1,191,373,374 | I_kwDODunzps5HAuo- | 4,086 | Dataset viewer issue for McGill-NLP/feedbackQA | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-04-04T07:27:20Z | 2022-04-04T22:29:53Z | 2022-04-04T08:01:45Z | null | ## Dataset viewer issue for '*McGill-NLP/feedbackQA*'
**Link:** https://huggingface.co/datasets/McGill-NLP/feedbackQA
The dataset can be loaded correctly with `load_dataset` but the preview doesn't work. Error message:
```
Status code: 400
Exception: Status400Error
Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist.
```
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4086/timeline | null | completed | null | null | false | [
"Hi @cslizc, thanks for reporting.\r\n\r\nI have just forced the refresh of the corresponding cache and the preview is working now.",
"thank you so much"
] |
https://api.github.com/repos/huggingface/datasets/issues/5512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5512/comments | https://api.github.com/repos/huggingface/datasets/issues/5512/events | https://github.com/huggingface/datasets/pull/5512 | 1,576,142,432 | PR_kwDODunzps5JhtQy | 5,512 | Speed up batched PyTorch DataLoader | [] | closed | false | null | 9 | 2023-02-08T13:38:59Z | 2023-02-19T18:35:09Z | 2023-02-19T18:27:29Z | null | I implemented `__getitems__` to speed up batched data loading in PyTorch
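For context, a minimal sketch of the idea (not the exact code in this PR): recent PyTorch versions let a map-style dataset define `__getitems__`, which the default fetcher calls with the whole list of batch indices, so one batched Arrow read can replace N single-row reads:
```python
import pyarrow as pa
from torch.utils.data import DataLoader, Dataset

class ArrowBacked(Dataset):
    def __init__(self, table: pa.Table):
        self.table = table

    def __len__(self):
        return self.table.num_rows

    def __getitem__(self, i):
        return self.table.slice(i, 1).to_pylist()[0]

    def __getitems__(self, indices):
        # one batched take() instead of len(indices) row lookups
        return self.table.take(indices).to_pylist()

table = pa.table({"x": list(range(1000))})
loader = DataLoader(ArrowBacked(table), batch_size=8)
print(next(iter(loader)))
```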
close https://github.com/huggingface/datasets/issues/5505 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5512/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5512",
"merged_at": "2023-02-19T18:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5512"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008882 / 0.011353 (-0.002471) | 0.004562 / 0.011008 (-0.006446) | 0.100035 / 0.038508 (0.061527) | 0.030654 / 0.023109 (0.007545) | 0.298745 / 0.275898 (0.022847) | 0.356869 / 0.323480 (0.033389) | 0.007170 / 0.007986 (-0.000815) | 0.003471 / 0.004328 (-0.000858) | 0.077975 / 0.004250 (0.073725) | 0.037861 / 0.037052 (0.000809) | 0.311643 / 0.258489 (0.053154) | 0.343504 / 0.293841 (0.049663) | 0.033768 / 0.128546 (-0.094778) | 0.011342 / 0.075646 (-0.064304) | 0.323953 / 0.419271 (-0.095319) | 0.040818 / 0.043533 (-0.002715) | 0.298492 / 0.255139 (0.043353) | 0.327292 / 0.283200 (0.044092) | 0.088423 / 0.141683 (-0.053260) | 1.489520 / 1.452155 (0.037366) | 1.532962 / 1.492716 (0.040245) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223654 / 0.018006 (0.205647) | 0.415134 / 0.000490 (0.414644) | 0.007394 / 0.000200 (0.007194) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023616 / 0.037411 (-0.013795) | 0.096652 / 0.014526 (0.082126) | 0.105239 / 0.176557 (-0.071318) | 0.148637 / 0.737135 (-0.588498) | 0.107937 / 0.296338 (-0.188402) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426816 / 0.215209 (0.211607) | 4.241533 / 2.077655 (2.163878) | 1.946493 / 1.504120 (0.442373) | 1.735765 / 1.541195 (0.194570) | 1.781424 / 1.468490 
(0.312934) | 0.688082 / 4.584777 (-3.896694) | 3.396444 / 3.745712 (-0.349268) | 1.920333 / 5.269862 (-3.349528) | 1.293833 / 4.565676 (-3.271843) | 0.081967 / 0.424275 (-0.342308) | 0.012911 / 0.007607 (0.005304) | 0.536928 / 0.226044 (0.310884) | 5.452327 / 2.268929 (3.183399) | 2.505785 / 55.444624 (-52.938840) | 2.173627 / 6.876477 (-4.702850) | 2.119978 / 2.142072 (-0.022095) | 0.809012 / 4.805227 (-3.996215) | 0.149124 / 6.500664 (-6.351540) | 0.066008 / 0.075469 (-0.009461) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215702 / 1.841788 (-0.626085) | 13.757525 / 8.074308 (5.683217) | 13.999208 / 10.191392 (3.807816) | 0.164875 / 0.680424 (-0.515549) | 0.028517 / 0.534201 (-0.505684) | 0.394829 / 0.579283 (-0.184454) | 0.404962 / 0.434364 (-0.029401) | 0.484455 / 0.540337 (-0.055882) | 0.575008 / 1.386936 (-0.811928) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006754 / 0.011353 (-0.004598) | 0.004579 / 0.011008 (-0.006430) | 0.076617 / 0.038508 (0.038109) | 0.027902 / 0.023109 (0.004793) | 0.346278 / 0.275898 (0.070380) | 0.398060 / 0.323480 (0.074580) | 0.004938 / 0.007986 (-0.003047) | 0.004681 / 0.004328 (0.000353) | 0.076336 / 0.004250 (0.072086) | 0.038018 / 0.037052 (0.000966) | 0.358701 / 0.258489 (0.100212) | 0.408413 / 0.293841 (0.114572) | 0.031772 / 0.128546 (-0.096774) | 0.011604 / 0.075646 (-0.064042) | 0.085964 / 0.419271 (-0.333308) | 0.042030 / 0.043533 (-0.001502) | 0.343568 / 0.255139 (0.088429) | 0.381805 / 0.283200 (0.098605) | 0.090759 / 0.141683 (-0.050924) | 1.504553 / 1.452155 (0.052398) | 1.594006 / 1.492716 (0.101289) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227395 / 0.018006 (0.209389) | 0.403097 / 0.000490 (0.402608) | 0.000413 / 0.000200 (0.000213) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024693 / 0.037411 (-0.012718) | 0.100470 / 0.014526 (0.085944) | 0.108481 / 0.176557 (-0.068076) | 0.142791 / 0.737135 (-0.594345) | 0.109949 / 0.296338 (-0.186389) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443674 / 0.215209 (0.228465) | 4.412207 / 2.077655 (2.334553) | 2.073752 / 1.504120 (0.569632) | 1.863153 / 1.541195 (0.321958) | 1.940063 / 1.468490 (0.471573) | 0.696456 / 4.584777 (-3.888321) | 3.422120 / 3.745712 (-0.323592) | 1.902579 / 5.269862 (-3.367282) | 1.184948 / 4.565676 (-3.380729) | 0.083079 / 0.424275 (-0.341196) | 0.012649 / 0.007607 (0.005042) | 0.542035 / 0.226044 (0.315991) | 5.421826 / 2.268929 (3.152897) | 2.525092 / 55.444624 (-52.919532) | 2.177144 / 6.876477 (-4.699332) | 2.225224 / 2.142072 (0.083151) | 0.804739 / 4.805227 (-4.000488) | 0.151000 / 6.500664 (-6.349664) | 0.066987 / 0.075469 (-0.008482) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277199 / 1.841788 (-0.564589) | 14.184146 / 8.074308 (6.109838) | 13.413348 / 10.191392 (3.221956) | 0.128551 / 0.680424 (-0.551872) | 0.016461 / 0.534201 (-0.517740) | 0.379963 / 0.579283 (-0.199320) | 0.381350 / 0.434364 (-0.053014) | 0.439044 / 0.540337 (-0.101293) | 0.521559 / 1.386936 (-0.865377) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008876 / 0.011353 (-0.002477) | 0.004629 / 0.011008 (-0.006379) | 0.101697 / 0.038508 (0.063189) | 0.030373 / 0.023109 (0.007264) | 0.302206 / 0.275898 (0.026308) | 0.365835 / 0.323480 (0.042355) | 0.007877 / 0.007986 (-0.000109) | 0.004473 / 0.004328 (0.000144) | 0.077334 / 0.004250 (0.073084) | 0.038066 / 0.037052 (0.001014) | 0.308064 / 0.258489 (0.049575) | 0.347329 / 0.293841 (0.053488) | 0.034478 / 0.128546 (-0.094068) | 0.011651 / 0.075646 (-0.063995) | 0.323481 / 0.419271 (-0.095791) | 0.043515 / 0.043533 (-0.000018) | 0.299885 / 0.255139 (0.044746) | 0.328959 / 0.283200 (0.045760) | 0.095308 / 0.141683 (-0.046375) | 1.474058 / 1.452155 (0.021903) | 1.535335 / 1.492716 (0.042619) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.197416 / 0.018006 (0.179410) | 0.421935 / 0.000490 (0.421446) | 0.003490 / 0.000200 (0.003290) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024519 / 0.037411 (-0.012892) | 0.100710 / 0.014526 (0.086185) | 0.104520 / 0.176557 (-0.072036) | 0.142048 / 0.737135 (-0.595087) | 0.109274 / 0.296338 (-0.187064) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408766 / 0.215209 (0.193557) | 4.101720 / 2.077655 (2.024065) | 1.812375 / 1.504120 (0.308256) | 1.605819 / 1.541195 (0.064624) | 1.688923 / 1.468490 
(0.220433) | 0.691198 / 4.584777 (-3.893579) | 3.422137 / 3.745712 (-0.323575) | 1.921318 / 5.269862 (-3.348544) | 1.168770 / 4.565676 (-3.396906) | 0.082840 / 0.424275 (-0.341435) | 0.012740 / 0.007607 (0.005133) | 0.524333 / 0.226044 (0.298289) | 5.258077 / 2.268929 (2.989149) | 2.273177 / 55.444624 (-53.171447) | 1.931919 / 6.876477 (-4.944558) | 1.988415 / 2.142072 (-0.153658) | 0.812227 / 4.805227 (-3.993000) | 0.150043 / 6.500664 (-6.350622) | 0.066422 / 0.075469 (-0.009047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.188069 / 1.841788 (-0.653718) | 13.942681 / 8.074308 (5.868373) | 14.104658 / 10.191392 (3.913266) | 0.151966 / 0.680424 (-0.528458) | 0.028833 / 0.534201 (-0.505368) | 0.395125 / 0.579283 (-0.184158) | 0.408512 / 0.434364 (-0.025852) | 0.487587 / 0.540337 (-0.052751) | 0.570023 / 1.386936 (-0.816913) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006860 / 0.011353 (-0.004493) | 0.004582 / 0.011008 (-0.006426) | 0.079902 / 0.038508 (0.041394) | 0.027565 / 0.023109 (0.004456) | 0.341393 / 0.275898 (0.065495) | 0.378911 / 0.323480 (0.055431) | 0.005847 / 0.007986 (-0.002138) | 0.004681 / 0.004328 (0.000353) | 0.079422 / 0.004250 (0.075171) | 0.039135 / 0.037052 (0.002083) | 0.342026 / 0.258489 (0.083537) | 0.387510 / 0.293841 (0.093669) | 0.031999 / 0.128546 (-0.096547) | 0.011782 / 0.075646 (-0.063865) | 0.088563 / 0.419271 (-0.330709) | 0.042435 / 0.043533 (-0.001098) | 0.343055 / 0.255139 (0.087916) | 0.367437 / 0.283200 (0.084237) | 0.091578 / 0.141683 (-0.050104) | 1.506828 / 1.452155 (0.054673) | 1.599590 / 1.492716 (0.106874) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217939 / 0.018006 (0.199932) | 0.408352 / 0.000490 (0.407863) | 0.000394 / 0.000200 (0.000194) | 0.000063 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026344 / 0.037411 (-0.011067) | 0.102968 / 0.014526 (0.088442) | 0.110340 / 0.176557 (-0.066217) | 0.145696 / 0.737135 (-0.591439) | 0.111632 / 0.296338 (-0.184707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440764 / 0.215209 (0.225555) | 4.423179 / 2.077655 (2.345524) | 2.057016 / 1.504120 (0.552896) | 1.848741 / 1.541195 (0.307546) | 1.939827 / 1.468490 (0.471337) | 0.699370 / 4.584777 (-3.885407) | 3.472521 / 3.745712 (-0.273191) | 3.232557 / 5.269862 (-2.037305) | 1.755534 / 4.565676 (-2.810143) | 0.083469 / 0.424275 (-0.340807) | 0.012980 / 0.007607 (0.005373) | 0.557662 / 0.226044 (0.331618) | 5.435657 / 2.268929 (3.166729) | 2.545106 / 55.444624 (-52.899519) | 2.168047 / 6.876477 (-4.708430) | 2.234070 / 2.142072 (0.091997) | 0.804662 / 4.805227 (-4.000565) | 0.152832 / 6.500664 (-6.347833) | 0.069372 / 0.075469 (-0.006097) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.299189 / 1.841788 (-0.542598) | 14.752880 / 8.074308 (6.678572) | 13.607676 / 10.191392 (3.416284) | 0.150773 / 0.680424 (-0.529650) | 0.016701 / 0.534201 (-0.517500) | 0.379507 / 0.579283 (-0.199776) | 0.389401 / 0.434364 (-0.044963) | 0.444199 / 0.540337 (-0.096139) | 0.524264 / 1.386936 (-0.862672) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008694 / 0.011353 (-0.002659) | 0.004549 / 0.011008 (-0.006459) | 0.101164 / 0.038508 (0.062656) | 0.029644 / 0.023109 (0.006535) | 0.294849 / 0.275898 (0.018950) | 0.366755 / 0.323480 (0.043275) | 0.007205 / 0.007986 (-0.000780) | 0.004255 / 0.004328 (-0.000074) | 0.077433 / 0.004250 (0.073183) | 0.038024 / 0.037052 (0.000972) | 0.310380 / 0.258489 (0.051891) | 0.347093 / 0.293841 (0.053252) | 0.033232 / 0.128546 (-0.095314) | 0.011404 / 0.075646 (-0.064242) | 0.323341 / 0.419271 (-0.095930) | 0.040586 / 0.043533 (-0.002946) | 0.296083 / 0.255139 (0.040944) | 0.321870 / 0.283200 (0.038671) | 0.087377 / 0.141683 (-0.054306) | 1.466869 / 1.452155 (0.014715) | 1.514763 / 1.492716 (0.022046) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.010272 / 0.018006 (-0.007734) | 0.414645 / 0.000490 (0.414155) | 0.003730 / 0.000200 (0.003530) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024093 / 0.037411 (-0.013318) | 0.098718 / 0.014526 (0.084192) | 0.105526 / 0.176557 (-0.071030) | 0.141578 / 0.737135 (-0.595557) | 0.109679 / 0.296338 (-0.186660) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412907 / 0.215209 (0.197698) | 4.134934 / 2.077655 (2.057280) | 1.881180 / 1.504120 (0.377060) | 1.693207 / 1.541195 (0.152012) | 1.753725 / 1.468490 
(0.285235) | 0.693077 / 4.584777 (-3.891700) | 3.367409 / 3.745712 (-0.378303) | 2.749035 / 5.269862 (-2.520827) | 1.565015 / 4.565676 (-3.000662) | 0.082609 / 0.424275 (-0.341666) | 0.012500 / 0.007607 (0.004892) | 0.523619 / 0.226044 (0.297575) | 5.250188 / 2.268929 (2.981259) | 2.314255 / 55.444624 (-53.130369) | 1.962357 / 6.876477 (-4.914120) | 2.020632 / 2.142072 (-0.121441) | 0.812504 / 4.805227 (-3.992724) | 0.149921 / 6.500664 (-6.350743) | 0.065816 / 0.075469 (-0.009653) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.230811 / 1.841788 (-0.610977) | 14.008566 / 8.074308 (5.934258) | 14.371285 / 10.191392 (4.179893) | 0.166323 / 0.680424 (-0.514101) | 0.029702 / 0.534201 (-0.504499) | 0.408629 / 0.579283 (-0.170654) | 0.410529 / 0.434364 (-0.023835) | 0.484482 / 0.540337 (-0.055855) | 0.572360 / 1.386936 (-0.814576) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006873 / 0.011353 (-0.004480) | 0.004609 / 0.011008 (-0.006400) | 0.075492 / 0.038508 (0.036984) | 0.028560 / 0.023109 (0.005450) | 0.340321 / 0.275898 (0.064423) | 0.376758 / 0.323480 (0.053278) | 0.005271 / 0.007986 (-0.002715) | 0.004786 / 0.004328 (0.000457) | 0.074843 / 0.004250 (0.070592) | 0.041072 / 0.037052 (0.004019) | 0.339952 / 0.258489 (0.081463) | 0.384375 / 0.293841 (0.090534) | 0.031771 / 0.128546 (-0.096775) | 0.011607 / 0.075646 (-0.064039) | 0.084338 / 0.419271 (-0.334933) | 0.042251 / 0.043533 (-0.001282) | 0.338904 / 0.255139 (0.083765) | 0.365360 / 0.283200 (0.082160) | 0.093151 / 0.141683 (-0.048532) | 1.449833 / 1.452155 (-0.002322) | 1.601946 / 1.492716 (0.109229) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225149 / 0.018006 (0.207142) | 0.409855 / 0.000490 (0.409365) | 0.000384 / 0.000200 (0.000184) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025914 / 0.037411 (-0.011497) | 0.100443 / 0.014526 (0.085917) | 0.108557 / 0.176557 (-0.067999) | 0.150338 / 0.737135 (-0.586798) | 0.111472 / 0.296338 (-0.184866) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440221 / 0.215209 (0.225012) | 4.409268 / 2.077655 (2.331613) | 2.096008 / 1.504120 (0.591888) | 1.849443 / 1.541195 (0.308248) | 1.934901 / 1.468490 (0.466410) | 0.704072 / 4.584777 (-3.880705) | 3.371370 / 3.745712 (-0.374343) | 3.185478 / 5.269862 (-2.084384) | 1.514541 / 4.565676 (-3.051135) | 0.083724 / 0.424275 (-0.340551) | 0.012674 / 0.007607 (0.005067) | 0.542155 / 0.226044 (0.316111) | 5.413456 / 2.268929 (3.144528) | 2.508567 / 55.444624 (-52.936057) | 2.163235 / 6.876477 (-4.713242) | 2.193914 / 2.142072 (0.051842) | 0.810955 / 4.805227 (-3.994272) | 0.152769 / 6.500664 (-6.347895) | 0.068009 / 0.075469 (-0.007460) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272511 / 1.841788 (-0.569276) | 14.334861 / 8.074308 (6.260553) | 13.555445 / 10.191392 (3.364053) | 0.160520 / 0.680424 (-0.519904) | 0.018363 / 0.534201 (-0.515838) | 0.384937 / 0.579283 (-0.194346) | 0.409138 / 0.434364 (-0.025225) | 0.484037 / 0.540337 (-0.056300) | 0.565595 / 1.386936 (-0.821341) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010077 / 0.011353 (-0.001276) | 0.005650 / 0.011008 (-0.005359) | 0.101285 / 0.038508 (0.062777) | 0.039571 / 0.023109 (0.016462) | 0.291855 / 0.275898 (0.015957) | 0.363582 / 0.323480 (0.040102) | 0.008513 / 0.007986 (0.000527) | 0.004472 / 0.004328 (0.000144) | 0.077314 / 0.004250 (0.073064) | 0.050707 / 0.037052 (0.013654) | 0.317282 / 0.258489 (0.058792) | 0.342348 / 0.293841 (0.048507) | 0.042951 / 0.128546 (-0.085595) | 0.012295 / 0.075646 (-0.063351) | 0.337269 / 0.419271 (-0.082003) | 0.048953 / 0.043533 (0.005420) | 0.292547 / 0.255139 (0.037408) | 0.325436 / 0.283200 (0.042236) | 0.111859 / 0.141683 (-0.029824) | 1.501958 / 1.452155 (0.049804) | 1.522281 / 1.492716 (0.029565) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011775 / 0.018006 (-0.006231) | 0.513283 / 0.000490 (0.512793) | 0.002941 / 0.000200 (0.002741) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028702 / 0.037411 (-0.008710) | 0.108465 / 0.014526 (0.093940) | 0.121806 / 0.176557 (-0.054750) | 0.158424 / 0.737135 (-0.578712) | 0.128077 / 0.296338 (-0.168262) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395392 / 0.215209 (0.180183) | 3.944138 / 2.077655 (1.866483) | 1.773698 / 1.504120 (0.269578) | 1.588907 / 1.541195 (0.047712) | 1.697794 / 1.468490 
(0.229304) | 0.690281 / 4.584777 (-3.894496) | 3.819661 / 3.745712 (0.073948) | 3.228006 / 5.269862 (-2.041856) | 1.755625 / 4.565676 (-2.810052) | 0.083169 / 0.424275 (-0.341106) | 0.012337 / 0.007607 (0.004730) | 0.504730 / 0.226044 (0.278686) | 5.016916 / 2.268929 (2.747988) | 2.245484 / 55.444624 (-53.199141) | 1.911682 / 6.876477 (-4.964795) | 1.957659 / 2.142072 (-0.184413) | 0.818361 / 4.805227 (-3.986866) | 0.162386 / 6.500664 (-6.338279) | 0.062461 / 0.075469 (-0.013008) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197654 / 1.841788 (-0.644134) | 15.465611 / 8.074308 (7.391303) | 14.409126 / 10.191392 (4.217734) | 0.171776 / 0.680424 (-0.508647) | 0.028749 / 0.534201 (-0.505452) | 0.439666 / 0.579283 (-0.139618) | 0.445159 / 0.434364 (0.010795) | 0.543992 / 0.540337 (0.003655) | 0.643911 / 1.386936 (-0.743025) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007036 / 0.011353 (-0.004317) | 0.005273 / 0.011008 (-0.005735) | 0.075314 / 0.038508 (0.036806) | 0.033075 / 0.023109 (0.009966) | 0.350133 / 0.275898 (0.074235) | 0.399366 / 0.323480 (0.075886) | 0.005945 / 0.007986 (-0.002041) | 0.004276 / 0.004328 (-0.000052) | 0.074975 / 0.004250 (0.070725) | 0.051758 / 0.037052 (0.014706) | 0.355077 / 0.258489 (0.096588) | 0.430296 / 0.293841 (0.136455) | 0.036257 / 0.128546 (-0.092290) | 0.012376 / 0.075646 (-0.063270) | 0.087441 / 0.419271 (-0.331830) | 0.049066 / 0.043533 (0.005534) | 0.339867 / 0.255139 (0.084728) | 0.384379 / 0.283200 (0.101179) | 0.104843 / 0.141683 (-0.036840) | 1.498897 / 1.452155 (0.046742) | 1.551400 / 1.492716 (0.058684) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.334504 / 0.018006 (0.316498) | 0.516551 / 0.000490 (0.516061) | 0.000450 / 0.000200 (0.000250) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029313 / 0.037411 (-0.008099) | 0.110667 / 0.014526 (0.096141) | 0.124001 / 0.176557 (-0.052556) | 0.159154 / 0.737135 (-0.577981) | 0.129503 / 0.296338 (-0.166836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416749 / 0.215209 (0.201540) | 4.171163 / 2.077655 (2.093508) | 1.981071 / 1.504120 (0.476951) | 1.788303 / 1.541195 (0.247108) | 1.912118 / 1.468490 (0.443628) | 0.708764 / 4.584777 (-3.876013) | 3.815222 / 3.745712 (0.069510) | 2.121633 / 5.269862 (-3.148229) | 1.347866 / 4.565676 (-3.217811) | 0.086340 / 0.424275 (-0.337935) | 0.012646 / 0.007607 (0.005039) | 0.525286 / 0.226044 (0.299241) | 5.254922 / 2.268929 (2.985994) | 2.488743 / 55.444624 (-52.955881) | 2.128069 / 6.876477 (-4.748408) | 2.180358 / 2.142072 (0.038286) | 0.841011 / 4.805227 (-3.964216) | 0.168732 / 6.500664 (-6.331932) | 0.065559 / 0.075469 (-0.009910) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.270518 / 1.841788 (-0.571270) | 15.557563 / 8.074308 (7.483255) | 13.660757 / 10.191392 (3.469365) | 0.185636 / 0.680424 (-0.494788) | 0.018152 / 0.534201 (-0.516049) | 0.423553 / 0.579283 (-0.155730) | 0.412718 / 0.434364 (-0.021646) | 0.528455 / 0.540337 (-0.011882) | 0.635274 / 1.386936 (-0.751662) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011194 / 0.011353 (-0.000159) | 0.006344 / 0.011008 (-0.004664) | 0.122013 / 0.038508 (0.083505) | 0.044323 / 0.023109 (0.021214) | 0.356665 / 0.275898 (0.080767) | 0.439871 / 0.323480 (0.116391) | 0.010694 / 0.007986 (0.002709) | 0.004648 / 0.004328 (0.000320) | 0.091140 / 0.004250 (0.086890) | 0.052457 / 0.037052 (0.015404) | 0.369282 / 0.258489 (0.110793) | 0.403279 / 0.293841 (0.109438) | 0.054075 / 0.128546 (-0.074472) | 0.014484 / 0.075646 (-0.061162) | 0.407932 / 0.419271 (-0.011340) | 0.060681 / 0.043533 (0.017148) | 0.350889 / 0.255139 (0.095750) | 0.392041 / 0.283200 (0.108841) | 0.121252 / 0.141683 (-0.020431) | 1.809527 / 1.452155 (0.357373) | 1.835141 / 1.492716 (0.342425) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227372 / 0.018006 (0.209366) | 0.481908 / 0.000490 (0.481418) | 0.007262 / 0.000200 (0.007062) | 0.000148 / 0.000054 (0.000093) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031039 / 0.037411 (-0.006372) | 0.133947 / 0.014526 (0.119421) | 0.141935 / 0.176557 (-0.034622) | 0.197854 / 0.737135 (-0.539281) | 0.152393 / 0.296338 (-0.143945) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.517400 / 0.215209 (0.302191) | 4.899972 / 2.077655 (2.822317) | 2.171023 / 1.504120 (0.666903) | 2.008706 / 1.541195 (0.467511) | 1.988777 / 1.468490 
(0.520287) | 0.859872 / 4.584777 (-3.724905) | 4.673923 / 3.745712 (0.928211) | 2.703189 / 5.269862 (-2.566672) | 1.891680 / 4.565676 (-2.673997) | 0.109601 / 0.424275 (-0.314674) | 0.014622 / 0.007607 (0.007015) | 0.618990 / 0.226044 (0.392946) | 6.255608 / 2.268929 (3.986679) | 2.822199 / 55.444624 (-52.622425) | 2.457684 / 6.876477 (-4.418793) | 2.500041 / 2.142072 (0.357968) | 1.054529 / 4.805227 (-3.750698) | 0.209501 / 6.500664 (-6.291163) | 0.074929 / 0.075469 (-0.000540) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532780 / 1.841788 (-0.309008) | 19.159455 / 8.074308 (11.085147) | 17.817063 / 10.191392 (7.625671) | 0.194078 / 0.680424 (-0.486346) | 0.038211 / 0.534201 (-0.495990) | 0.537366 / 0.579283 (-0.041917) | 0.538995 / 0.434364 (0.104631) | 0.679431 / 0.540337 (0.139094) | 0.801960 / 1.386936 (-0.584976) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008729 / 0.011353 (-0.002624) | 0.005711 / 0.011008 (-0.005297) | 0.091570 / 0.038508 (0.053062) | 0.039805 / 0.023109 (0.016696) | 0.413507 / 0.275898 (0.137609) | 0.456342 / 0.323480 (0.132862) | 0.006201 / 0.007986 (-0.001785) | 0.009700 / 0.004328 (0.005372) | 0.089146 / 0.004250 (0.084896) | 0.057543 / 0.037052 (0.020490) | 0.420806 / 0.258489 (0.162317) | 0.471962 / 0.293841 (0.178121) | 0.043940 / 0.128546 (-0.084606) | 0.014457 / 0.075646 (-0.061190) | 0.106674 / 0.419271 (-0.312598) | 0.058930 / 0.043533 (0.015397) | 0.419111 / 0.255139 (0.163972) | 0.452974 / 0.283200 (0.169774) | 0.124573 / 0.141683 (-0.017110) | 1.864753 / 1.452155 (0.412599) | 1.935387 / 1.492716 (0.442670) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.275657 / 0.018006 (0.257651) | 0.498096 / 0.000490 (0.497606) | 0.000480 / 0.000200 (0.000280) | 0.000066 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034377 / 0.037411 (-0.003035) | 0.138050 / 0.014526 (0.123524) | 0.153718 / 0.176557 (-0.022838) | 0.201445 / 0.737135 (-0.535690) | 0.160346 / 0.296338 (-0.135992) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.540670 / 0.215209 (0.325461) | 5.376291 / 2.077655 (3.298636) | 2.581799 / 1.504120 (1.077679) | 2.328858 / 1.541195 (0.787663) | 2.446458 / 1.468490 (0.977968) | 0.923005 / 4.584777 (-3.661772) | 4.815977 / 3.745712 (1.070265) | 4.205725 / 5.269862 (-1.064137) | 2.400466 / 4.565676 (-2.165211) | 0.107207 / 0.424275 (-0.317068) | 0.015427 / 0.007607 (0.007819) | 0.657267 / 0.226044 (0.431222) | 6.491256 / 2.268929 (4.222327) | 3.179099 / 55.444624 (-52.265525) | 2.722434 / 6.876477 (-4.154042) | 2.788202 / 2.142072 (0.646129) | 1.060016 / 4.805227 (-3.745211) | 0.206899 / 6.500664 (-6.293766) | 0.077868 / 0.075469 (0.002399) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567894 / 1.841788 (-0.273893) | 19.314330 / 8.074308 (11.240022) | 17.597614 / 10.191392 (7.406222) | 0.195777 / 0.680424 (-0.484647) | 0.022160 / 0.534201 (-0.512041) | 0.530592 / 0.579283 (-0.048691) | 0.508591 / 0.434364 (0.074227) | 0.619794 / 0.540337 (0.079457) | 0.749773 / 1.386936 (-0.637163) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012431 / 0.011353 (0.001078) | 0.006526 / 0.011008 (-0.004482) | 0.132266 / 0.038508 (0.093757) | 0.043199 / 0.023109 (0.020089) | 0.405230 / 0.275898 (0.129332) | 0.494643 / 0.323480 (0.171163) | 0.009927 / 0.007986 (0.001941) | 0.005227 / 0.004328 (0.000899) | 0.110914 / 0.004250 (0.106664) | 0.047815 / 0.037052 (0.010763) | 0.419099 / 0.258489 (0.160610) | 0.463405 / 0.293841 (0.169564) | 0.057858 / 0.128546 (-0.070688) | 0.018918 / 0.075646 (-0.056728) | 0.450584 / 0.419271 (0.031313) | 0.060457 / 0.043533 (0.016924) | 0.408234 / 0.255139 (0.153095) | 0.433722 / 0.283200 (0.150523) | 0.119403 / 0.141683 (-0.022280) | 1.966742 / 1.452155 (0.514587) | 1.980685 / 1.492716 (0.487969) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292853 / 0.018006 (0.274847) | 0.619697 / 0.000490 (0.619207) | 0.002135 / 0.000200 (0.001935) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031283 / 0.037411 (-0.006129) | 0.128649 / 0.014526 (0.114123) | 0.150116 / 0.176557 (-0.026441) | 0.187605 / 0.737135 (-0.549530) | 0.153334 / 0.296338 (-0.143005) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.659660 / 0.215209 (0.444451) | 6.459749 / 2.077655 (4.382094) | 2.764566 / 1.504120 (1.260446) | 2.362630 / 1.541195 (0.821435) | 2.426421 / 1.468490 
(0.957931) | 1.282407 / 4.584777 (-3.302370) | 5.668865 / 3.745712 (1.923153) | 3.236255 / 5.269862 (-2.033606) | 2.248836 / 4.565676 (-2.316841) | 0.145861 / 0.424275 (-0.278414) | 0.015707 / 0.007607 (0.008100) | 0.805218 / 0.226044 (0.579174) | 8.146831 / 2.268929 (5.877903) | 3.506283 / 55.444624 (-51.938341) | 2.736682 / 6.876477 (-4.139795) | 2.959039 / 2.142072 (0.816967) | 1.528428 / 4.805227 (-3.276799) | 0.270980 / 6.500664 (-6.229684) | 0.086824 / 0.075469 (0.011355) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.682506 / 1.841788 (-0.159282) | 18.844103 / 8.074308 (10.769795) | 21.008471 / 10.191392 (10.817079) | 0.258372 / 0.680424 (-0.422052) | 0.046505 / 0.534201 (-0.487696) | 0.574760 / 0.579283 (-0.004523) | 0.663745 / 0.434364 (0.229381) | 0.702411 / 0.540337 (0.162074) | 0.824024 / 1.386936 (-0.562912) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010016 / 0.011353 (-0.001337) | 0.007459 / 0.011008 (-0.003549) | 0.103954 / 0.038508 (0.065446) | 0.036363 / 0.023109 (0.013254) | 0.464079 / 0.275898 (0.188181) | 0.504730 / 0.323480 (0.181250) | 0.007865 / 0.007986 (-0.000121) | 0.005210 / 0.004328 (0.000882) | 0.105018 / 0.004250 (0.100767) | 0.062191 / 0.037052 (0.025139) | 0.483304 / 0.258489 (0.224815) | 0.547030 / 0.293841 (0.253189) | 0.055436 / 0.128546 (-0.073110) | 0.021073 / 0.075646 (-0.054573) | 0.120952 / 0.419271 (-0.298319) | 0.075593 / 0.043533 (0.032060) | 0.459930 / 0.255139 (0.204791) | 0.486924 / 0.283200 (0.203724) | 0.129465 / 0.141683 (-0.012218) | 1.902322 / 1.452155 (0.450167) | 1.980809 / 1.492716 (0.488092) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.259263 / 0.018006 (0.241257) | 0.596703 / 0.000490 (0.596213) | 0.004520 / 0.000200 (0.004320) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032802 / 0.037411 (-0.004609) | 0.138751 / 0.014526 (0.124225) | 0.147106 / 0.176557 (-0.029451) | 0.194791 / 0.737135 (-0.542345) | 0.152643 / 0.296338 (-0.143696) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.678455 / 0.215209 (0.463246) | 6.673643 / 2.077655 (4.595989) | 2.943368 / 1.504120 (1.439248) | 2.591223 / 1.541195 (1.050029) | 2.741097 / 1.468490 (1.272607) | 1.261178 / 4.584777 (-3.323599) | 5.773853 / 3.745712 (2.028141) | 3.171559 / 5.269862 (-2.098303) | 2.124898 / 4.565676 (-2.440779) | 0.161849 / 0.424275 (-0.262426) | 0.015498 / 0.007607 (0.007891) | 0.857984 / 0.226044 (0.631940) | 8.456946 / 2.268929 (6.188018) | 3.818787 / 55.444624 (-51.625837) | 3.009953 / 6.876477 (-3.866523) | 3.113006 / 2.142072 (0.970934) | 1.477299 / 4.805227 (-3.327929) | 0.267207 / 6.500664 (-6.233457) | 0.087590 / 0.075469 (0.012121) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.757389 / 1.841788 (-0.084398) | 19.287690 / 8.074308 (11.213381) | 21.601991 / 10.191392 (11.410599) | 0.260464 / 0.680424 (-0.419960) | 0.028552 / 0.534201 (-0.505649) | 0.558934 / 0.579283 (-0.020349) | 0.673651 / 0.434364 (0.239287) | 0.714448 / 0.540337 (0.174111) | 0.857608 / 1.386936 (-0.529328) |\n\n</details>\n</details>\n\n\n",
"Ready for review @mariosasko, LMKWYT :)\r\n\r\nSorry it tooks me a few tries to fix the CI - I ended up not trying to use the latest `torch` version in the CI.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009474 / 0.011353 (-0.001878) | 0.005507 / 0.011008 (-0.005501) | 0.101219 / 0.038508 (0.062711) | 0.035591 / 0.023109 (0.012481) | 0.305841 / 0.275898 (0.029943) | 0.339135 / 0.323480 (0.015656) | 0.007920 / 0.007986 (-0.000066) | 0.004252 / 0.004328 (-0.000077) | 0.076912 / 0.004250 (0.072662) | 0.041923 / 0.037052 (0.004871) | 0.301405 / 0.258489 (0.042916) | 0.356488 / 0.293841 (0.062647) | 0.039342 / 0.128546 (-0.089204) | 0.012711 / 0.075646 (-0.062935) | 0.334193 / 0.419271 (-0.085079) | 0.049112 / 0.043533 (0.005579) | 0.301484 / 0.255139 (0.046345) | 0.315306 / 0.283200 (0.032106) | 0.102959 / 0.141683 (-0.038724) | 1.420677 / 1.452155 (-0.031478) | 1.549493 / 1.492716 (0.056777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284639 / 0.018006 (0.266633) | 0.501226 / 0.000490 (0.500736) | 0.004328 / 0.000200 (0.004128) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027034 / 0.037411 (-0.010377) | 0.108066 / 0.014526 (0.093540) | 0.122106 / 0.176557 (-0.054451) | 0.162908 / 0.737135 (-0.574227) | 0.127233 / 0.296338 (-0.169105) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394023 / 0.215209 (0.178813) | 3.932729 / 2.077655 (1.855075) | 1.771195 / 1.504120 (0.267075) | 1.582788 / 1.541195 (0.041594) | 1.703219 / 1.468490 
(0.234728) | 0.702629 / 4.584777 (-3.882148) | 3.780187 / 3.745712 (0.034475) | 2.180433 / 5.269862 (-3.089428) | 1.504806 / 4.565676 (-3.060871) | 0.085289 / 0.424275 (-0.338986) | 0.012580 / 0.007607 (0.004973) | 0.515408 / 0.226044 (0.289363) | 5.010613 / 2.268929 (2.741685) | 2.256648 / 55.444624 (-53.187976) | 1.914971 / 6.876477 (-4.961505) | 2.038436 / 2.142072 (-0.103636) | 0.846240 / 4.805227 (-3.958987) | 0.164920 / 6.500664 (-6.335744) | 0.063899 / 0.075469 (-0.011570) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.224160 / 1.841788 (-0.617627) | 15.089995 / 8.074308 (7.015687) | 14.777003 / 10.191392 (4.585611) | 0.169873 / 0.680424 (-0.510551) | 0.029233 / 0.534201 (-0.504968) | 0.445424 / 0.579283 (-0.133859) | 0.439194 / 0.434364 (0.004830) | 0.536370 / 0.540337 (-0.003968) | 0.636694 / 1.386936 (-0.750242) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008230 / 0.011353 (-0.003122) | 0.005499 / 0.011008 (-0.005509) | 0.076108 / 0.038508 (0.037600) | 0.037444 / 0.023109 (0.014335) | 0.364420 / 0.275898 (0.088522) | 0.412308 / 0.323480 (0.088828) | 0.006704 / 0.007986 (-0.001282) | 0.004359 / 0.004328 (0.000031) | 0.075080 / 0.004250 (0.070830) | 0.057698 / 0.037052 (0.020646) | 0.366088 / 0.258489 (0.107599) | 0.409583 / 0.293841 (0.115742) | 0.037882 / 0.128546 (-0.090664) | 0.012421 / 0.075646 (-0.063225) | 0.087701 / 0.419271 (-0.331571) | 0.050669 / 0.043533 (0.007136) | 0.351139 / 0.255139 (0.096000) | 0.384340 / 0.283200 (0.101140) | 0.108097 / 0.141683 (-0.033586) | 1.445010 / 1.452155 (-0.007145) | 1.559570 / 1.492716 (0.066853) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.324114 / 0.018006 (0.306108) | 0.549134 / 0.000490 (0.548644) | 0.003544 / 0.000200 (0.003344) | 0.000097 / 0.000054 (0.000042) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030646 / 0.037411 (-0.006765) | 0.108573 / 0.014526 (0.094047) | 0.125291 / 0.176557 (-0.051266) | 0.174798 / 0.737135 (-0.562338) | 0.128000 / 0.296338 (-0.168338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.428881 / 0.215209 (0.213672) | 4.282320 / 2.077655 (2.204665) | 2.061462 / 1.504120 (0.557342) | 1.858477 / 1.541195 (0.317283) | 1.971646 / 1.468490 (0.503156) | 0.723631 / 4.584777 (-3.861146) | 3.822376 / 3.745712 (0.076664) | 2.174427 / 5.269862 (-3.095434) | 1.386066 / 4.565676 (-3.179611) | 0.088391 / 0.424275 (-0.335884) | 0.012948 / 0.007607 (0.005341) | 0.524423 / 0.226044 (0.298378) | 5.249389 / 2.268929 (2.980460) | 2.528662 / 55.444624 (-52.915962) | 2.245329 / 6.876477 (-4.631147) | 2.402733 / 2.142072 (0.260660) | 0.868864 / 4.805227 (-3.936364) | 0.174066 / 6.500664 (-6.326598) | 0.066165 / 0.075469 (-0.009304) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296922 / 1.841788 (-0.544865) | 15.814109 / 8.074308 (7.739801) | 14.086059 / 10.191392 (3.894667) | 0.190952 / 0.680424 (-0.489472) | 0.017679 / 0.534201 (-0.516522) | 0.428872 / 0.579283 (-0.150411) | 0.435399 / 0.434364 (0.001035) | 0.540856 / 0.540337 (0.000519) | 0.648904 / 1.386936 (-0.738032) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4810/comments | https://api.github.com/repos/huggingface/datasets/issues/4810/events | https://github.com/huggingface/datasets/pull/4810 | 1,333,038,702 | PR_kwDODunzps484C9l | 4,810 | Add description to hellaswag dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2022-08-09T10:21:14Z | 2022-09-23T11:35:38Z | 2022-09-23T11:33:44Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4810/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4810",
"merged_at": "2022-09-23T11:33:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4810"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Are the `metadata JSON file` not on their way to deprecation? 😆😇\r\n\r\nIMO, more generally than this particular PR, the contribution process should be simplified now that many validation checks happen on the hub side.\r\n\r\nKeeping this open in the meantime to get more potential feedback!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3237/comments | https://api.github.com/repos/huggingface/datasets/issues/3237/events | https://github.com/huggingface/datasets/issues/3237 | 1,048,165,525 | I_kwDODunzps4-ebyV | 3,237 | wikitext description wrong | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-11-09T04:06:52Z | 2022-02-14T15:45:11Z | 2021-11-09T13:49:28Z | null | ## Describe the bug
Descriptions of the wikitext datasets are wrong.
## Steps to reproduce the bug
Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50
## Expected results
The descriptions for raw-v1 and v1 should be switched. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3237/timeline | null | completed | null | null | false | [
"Hi @hongyuanmei, thanks for reporting.\r\n\r\nI'm fixing it.",
"Duplicate of:\r\n- #795"
] |
https://api.github.com/repos/huggingface/datasets/issues/4750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4750/comments | https://api.github.com/repos/huggingface/datasets/issues/4750/events | https://github.com/huggingface/datasets/issues/4750 | 1,319,333,645 | I_kwDODunzps5Oo28N | 4,750 | Easily create loading script for benchmark comprising multiple huggingface datasets | [] | closed | false | null | 2 | 2022-07-27T10:13:38Z | 2022-07-27T13:58:07Z | 2022-07-27T13:58:07Z | null | Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function `_split_generators` needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I just want to make a wrapper dataset that provides a single interface to all the underlying datasets.
I thought about downloading the files with the load_dataset function and then providing the link to the cached file. But this seems a bit inelegant to me. What approach would you propose to do this?
Please let me know if you have any questions.
Cheers,
Joel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4750/timeline | null | completed | null | null | false | [
"Hi ! I think the simplest is to copy paste the `_split_generators` code from the other datasets and do a bunch of if-else, as in the glue dataset: https://huggingface.co/datasets/glue/blob/main/glue.py#L467",
"Ok, I see. Thank you"
] |
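The if-else approach suggested in the comment above — branching inside `_split_generators` on the builder config, as the GLUE script does — could look roughly like the sketch below. This is a minimal sketch only: the config names, URLs, file layout and features are hypothetical placeholders rather than any real benchmark's.

```python
# Minimal sketch of a wrapper loading script that branches per sub-dataset in
# _split_generators, in the spirit of the GLUE script. All names and URLs below
# are hypothetical placeholders.
import os

import datasets

_URLS = {
    "sub_dataset_a": "https://example.com/sub_dataset_a.zip",  # hypothetical
    "sub_dataset_b": "https://example.com/sub_dataset_b.zip",  # hypothetical
}


class MyBenchmark(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=name, version=datasets.Version("1.0.0")) for name in _URLS
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Download and extract only the archive of the requested sub-dataset.
        data_dir = dl_manager.download_and_extract(_URLS[self.config.name])
        # Each sub-dataset keeps its files in a different place, hence the branching.
        if self.config.name == "sub_dataset_a":
            train_path = os.path.join(data_dir, "train.txt")  # assumed layout
        else:
            train_path = os.path.join(data_dir, "data", "train.txt")  # assumed layout
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```

Each `BuilderConfig` then selects one underlying dataset, so a single loading script exposes all of them through `load_dataset("my_benchmark", "sub_dataset_a")`.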
https://api.github.com/repos/huggingface/datasets/issues/3995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3995/comments | https://api.github.com/repos/huggingface/datasets/issues/3995/events | https://github.com/huggingface/datasets/pull/3995 | 1,178,232,623 | PR_kwDODunzps404054 | 3,995 | Close `PIL.Image` file handler in `Image.decode_example` | [] | closed | false | null | 1 | 2022-03-23T14:51:48Z | 2022-03-23T18:24:52Z | 2022-03-23T18:19:27Z | null | Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error.
To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926.
Fix #3985
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3995/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3995",
"merged_at": "2022-03-23T18:19:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3995"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
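As an illustration of the pattern this PR describes — forcing `image.load()` and then releasing the underlying file handle — here is a minimal, hedged sketch. It is not the actual `datasets.Image.decode_example` implementation, and Pillow's handling of the `fp` attribute can differ between versions.

```python
# Hedged sketch of "load the pixel data, then close the file handler"; not the
# actual implementation in datasets.features.Image.
import PIL.Image


def decode_example(path_or_fileobj):
    image = PIL.Image.open(path_or_fileobj)
    image.load()  # reads the pixel data; also sets image.readonly to 0
    if getattr(image, "fp", None) is not None:
        image.fp.close()  # release the handle to avoid "Too many open files"
        image.fp = None
    return image
```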
https://api.github.com/repos/huggingface/datasets/issues/755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/755/comments | https://api.github.com/repos/huggingface/datasets/issues/755/events | https://github.com/huggingface/datasets/pull/755 | 728,203,821 | MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2 | 755 | Start community-provided dataset docs V2 | [] | closed | false | null | 0 | 2020-10-23T13:07:30Z | 2020-10-23T13:15:37Z | 2020-10-23T13:15:37Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/755/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/755.diff",
"html_url": "https://github.com/huggingface/datasets/pull/755",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/755.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/755"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/2754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2754/comments | https://api.github.com/repos/huggingface/datasets/issues/2754/events | https://github.com/huggingface/datasets/pull/2754 | 959,105,577 | MDExOlB1bGxSZXF1ZXN0NzAyMjcxMjM4 | 2,754 | Generate metadata JSON for telugu_books dataset | [] | closed | false | null | 0 | 2021-08-03T13:14:52Z | 2021-08-04T08:49:02Z | 2021-08-04T08:49:02Z | null | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2754/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2754",
"merged_at": "2021-08-04T08:49:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2754"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/636/comments | https://api.github.com/repos/huggingface/datasets/issues/636/events | https://github.com/huggingface/datasets/pull/636 | 702,883,989 | MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5 | 636 | Consistent ner features | [] | closed | false | null | 0 | 2020-09-16T15:56:25Z | 2020-09-17T09:52:59Z | 2020-09-17T09:52:58Z | null | As discussed in #613 , this PR aims at making NER feature names consistent across datasets.
I changed the feature names of LinCE and XTREME/PAN-X | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/636/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/636",
"merged_at": "2020-09-17T09:52:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/636"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2358/comments | https://api.github.com/repos/huggingface/datasets/issues/2358/events | https://github.com/huggingface/datasets/pull/2358 | 891,269,577 | MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2 | 2,358 | Roman Urdu Stopwords List | [] | closed | false | null | 2 | 2021-05-13T18:29:27Z | 2021-05-19T08:50:43Z | 2021-05-17T14:05:10Z | null | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a basic effort to collect stopwords for Roman Urdu, to help with analyzing text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2358/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2358.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2358",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2358.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2358"
} | true | [
"Hi ! Thanks for sharing :)\r\nI think the best place to share this is probably the `Languages at Hugging Face` section of the forum:\r\nhttps://discuss.huggingface.co/c/languages-at-hugging-face/15\r\n\r\nSince this is not a dataset, I'm closing this PR if you don't mind",
"Thank you I will look into the link that you have shared with me.\n\n\n\n\nOn Mon, May 17, 2021 at 7:05 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> Closed #2358 <https://github.com/huggingface/datasets/pull/2358>.\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/2358#event-4754836267>, or\n> unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AN7SJYJVY4C5XQRDNET743DTOEPC7ANCNFSM443AZ3MA>\n> .\n>\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3278/comments | https://api.github.com/repos/huggingface/datasets/issues/3278/events | https://github.com/huggingface/datasets/pull/3278 | 1,054,249,463 | PR_kwDODunzps4uj2EQ | 3,278 | Proposed update to the documentation for WER | [] | closed | false | null | 0 | 2021-11-15T23:28:31Z | 2021-11-16T11:19:37Z | 2021-11-16T11:19:37Z | null | I wanted to submit a minor update to the description of WER for your consideration.
Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0:
```
>>> from datasets import load_metric
>>> metric = load_metric("wer")
>>> metric.compute(predictions=["hello how are you"], references=["hello"])
3.0
```
and similarly from the underlying jiwer module's `wer` function:
```
>>> from jiwer import wer
>>> wer("hello", "hello how are you")
3.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3278/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3278.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3278",
"merged_at": "2021-11-16T11:19:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3278.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3278"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4491/comments | https://api.github.com/repos/huggingface/datasets/issues/4491/events | https://github.com/huggingface/datasets/issues/4491 | 1,270,803,822 | I_kwDODunzps5Lvu1u | 4,491 | Dataset Viewer issue for Pavithree/test | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-06-14T13:23:10Z | 2022-06-14T14:37:21Z | 2022-06-14T14:34:33Z | null | ### Link
https://huggingface.co/datasets/Pavithree/test
### Description
I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset it throws an ArrowNotImplementedError: Unsupported cast from string to null using function cast_null. Is there anything missing on my end? Kindly help.
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4491/timeline | null | completed | null | null | false | [
"This issue can be resolved according to this post https://stackoverflow.com/questions/70566660/parquet-with-null-columns-on-pyarrow. It looks like first data entry in the json file must not have any null values as pyarrow uses this first file to infer schema for entire dataset."
] |
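Building on the comment above, a common workaround is to pass an explicit `Features` schema to `load_dataset` so that pyarrow does not infer a null-typed column from the first JSON records. The sketch below is illustrative only; the field names are hypothetical and not taken from the dataset itself.

```python
# Hedged sketch: declare the schema explicitly so null values in the first JSON
# records cannot make pyarrow infer a null-typed column. Field names are hypothetical.
from datasets import Features, Value, load_dataset

features = Features(
    {
        "title": Value("string"),
        "selftext": Value("string"),
        "answers": Value("string"),
    }
)

ds = load_dataset("json", data_files="train.json", features=features)
```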
https://api.github.com/repos/huggingface/datasets/issues/489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/489/comments | https://api.github.com/repos/huggingface/datasets/issues/489/events | https://github.com/huggingface/datasets/issues/489 | 676,456,257 | MDU6SXNzdWU2NzY0NTYyNTc= | 489 | ug | [] | closed | false | null | 2 | 2020-08-10T22:33:03Z | 2020-08-10T22:55:14Z | 2020-08-10T22:33:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/489/timeline | null | completed | null | null | false | [
"whoops",
"please delete this"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/3514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3514/comments | https://api.github.com/repos/huggingface/datasets/issues/3514/events | https://github.com/huggingface/datasets/pull/3514 | 1,092,606,383 | PR_kwDODunzps4weN9W | 3,514 | Fix to_tf_dataset references in docs | [] | closed | false | null | 1 | 2022-01-03T15:31:39Z | 2022-01-05T18:52:48Z | 2022-01-05T18:52:48Z | null | Fix the `to_tf_dataset` references in the docs. The currently failing example of usage will be fixed by #3338. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3514/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3514.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3514",
"merged_at": "2022-01-05T18:52:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3514.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3514"
} | true | [
"The code snippet in [this section](https://huggingface.co/docs/datasets/master/use_dataset.html?highlight=to_tf_dataset#tensorflow) is missing an import (`DataCollatorWithPadding`) and doesn't initialize the TF model before the `model.fit` call."
] |
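A hedged sketch of the fix described in the comment above — importing `DataCollatorWithPadding` and instantiating and compiling the TF model before calling `model.fit` — might look as follows. The checkpoint, columns and hyperparameters are illustrative assumptions, not the exact docs snippet.

```python
# Hedged sketch of the corrected to_tf_dataset usage mentioned above; checkpoint,
# column names and hyperparameters are illustrative assumptions.
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding, TFAutoModelForSequenceClassification

ds = load_dataset("glue", "mrpc", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ds = ds.map(lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True), batched=True)

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    shuffle=True,
    collate_fn=data_collator,
)

# The model must be created and compiled before fit, which the original snippet skipped.
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
model.compile(
    optimizer=tf.keras.optimizers.Adam(3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(tf_ds, epochs=1)
```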
https://api.github.com/repos/huggingface/datasets/issues/3325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3325/comments | https://api.github.com/repos/huggingface/datasets/issues/3325/events | https://github.com/huggingface/datasets/pull/3325 | 1,064,663,075 | PR_kwDODunzps4vEaGO | 3,325 | Update conda dependencies | [] | closed | false | null | 0 | 2021-11-26T16:08:07Z | 2021-11-26T16:20:37Z | 2021-11-26T16:20:36Z | null | Some dependencies minimum versions were outdated. For example `pyarrow` and `huggingface_hub` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3325/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3325.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3325",
"merged_at": "2021-11-26T16:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3325.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3325"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1299/comments | https://api.github.com/repos/huggingface/datasets/issues/1299/events | https://github.com/huggingface/datasets/issues/1299 | 759,414,566 | MDU6SXNzdWU3NTk0MTQ1NjY= | 1,299 | can't load "german_legal_entity_recognition" dataset | [] | closed | false | null | 3 | 2020-12-08T12:42:01Z | 2020-12-16T16:03:13Z | 2020-12-16T16:03:13Z | null | FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1299/timeline | null | completed | null | null | false | [
"Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos",
"> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during the execution of this line:\r\ndataset = load_dataset(\"german_legal_entity_recognition\")\r\n\r\nAlso, when I try to open mentioned links via Opera I have errors \"404: Not Found\" and \"This XML file does not appear to have any style information associated with it. The document tree is shown below.\" respectively.",
"Hello @nataly-obr, the `german_legal_entity_recognition` dataset has not yet been released (it is part of the coming soon v2 release).\r\n\r\nYou can still access it now if you want, but you will need to install `datasets` via the master branch:\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n\r\nPlease let me know if it solves the issue :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/4850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4850/comments | https://api.github.com/repos/huggingface/datasets/issues/4850/events | https://github.com/huggingface/datasets/pull/4850 | 1,338,702,306 | PR_kwDODunzps49KnZ8 | 4,850 | Fix test of _get_extraction_protocol for TAR files | [] | closed | false | null | 1 | 2022-08-15T08:37:58Z | 2022-08-15T09:42:56Z | 2022-08-15T09:28:46Z | null | While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar]
```
This PR:
- refactors the test so that it asserts the exceptions are raised instead of xfailing
- fixes the test for TAR files: it does not raise an exception, but returns "tar"
- fixes some wrongly named tests: swaps `test_streaming_dl_manager_get_extraction_protocol` with `test_streaming_dl_manager_get_extraction_protocol_gg_drive` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4850/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4850",
"merged_at": "2022-08-15T09:28:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4850"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
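The refactor this PR describes — asserting on the returned protocol and on the raised exception explicitly instead of relying on xfail — could be sketched roughly as below. The import path and the exact exception type are assumptions that may differ across `datasets` versions.

```python
# Hedged sketch of replacing xfail-style tests with explicit assertions; the import
# path and exception type are assumptions.
import pytest

from datasets.download.streaming_download_manager import _get_extraction_protocol  # assumed path


def test_streaming_dl_manager_get_extraction_protocol_tar():
    # A plain .tar URL does not raise: the protocol is simply "tar".
    assert _get_extraction_protocol("https://foo.bar/train.tar") == "tar"


@pytest.mark.parametrize("urlpath", ["https://foo.bar/train.tar.gz"])
def test_streaming_dl_manager_get_extraction_protocol_throws(urlpath):
    with pytest.raises(NotImplementedError):
        _get_extraction_protocol(urlpath)
```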
https://api.github.com/repos/huggingface/datasets/issues/2632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2632/comments | https://api.github.com/repos/huggingface/datasets/issues/2632/events | https://github.com/huggingface/datasets/pull/2632 | 942,293,727 | MDExOlB1bGxSZXF1ZXN0Njg4MDQyMjcw | 2,632 | add image-classification task template | [] | closed | false | null | 2 | 2021-07-12T17:41:03Z | 2021-07-13T15:44:28Z | 2021-07-13T15:28:16Z | null | Snippet below is the tl;dr, but you can try it out directly here:
[](https://colab.research.google.com/gist/nateraw/005c025d41f0e48ae3d4ee61c0f20b70/image-classification-task-template-demo.ipynb)
```python
from datasets import load_dataset
ds = load_dataset('nateraw/image-folder', data_files='PetImages/')
# DatasetDict({
# train: Dataset({
# features: ['file', 'labels'],
# num_rows: 23410
# })
# })
ds = ds.prepare_for_task('image-classification')
# DatasetDict({
# train: Dataset({
# features: ['image_file_path', 'labels'],
# num_rows: 23410
# })
# })
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2632/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2632.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2632",
"merged_at": "2021-07-13T15:28:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2632.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2632"
} | true | [
"Awesome!",
"Thanks for adding a new task template - great work @nateraw 🚀 !"
] |
https://api.github.com/repos/huggingface/datasets/issues/1517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1517/comments | https://api.github.com/repos/huggingface/datasets/issues/1517/events | https://github.com/huggingface/datasets/pull/1517 | 764,045,214 | MDExOlB1bGxSZXF1ZXN0NTM4MzAyNDM1 | 1,517 | Kd conv smangrul | [] | closed | false | null | 2 | 2020-12-12T16:51:30Z | 2020-12-16T14:56:14Z | 2020-12-16T14:56:14Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1517/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1517/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1517.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1517",
"merged_at": "2020-12-16T14:56:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1517.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1517"
} | true | [
"Hii please follow me",
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1303/comments | https://api.github.com/repos/huggingface/datasets/issues/1303/events | https://github.com/huggingface/datasets/pull/1303 | 759,440,484 | MDExOlB1bGxSZXF1ZXN0NTM0NDQ2NDg0 | 1,303 | adding opus_openoffice | [] | closed | false | null | 0 | 2020-12-08T13:20:21Z | 2020-12-10T09:37:10Z | 2020-12-10T09:37:10Z | null | Adding Opus OpenOffice: http://opus.nlpl.eu/OpenOffice.php
8 languages, 28 bitexts | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1303/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1303.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1303",
"merged_at": "2020-12-10T09:37:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1303.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1303"
} | true | [] |