url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4127/comments | https://api.github.com/repos/huggingface/datasets/issues/4127/events | https://github.com/huggingface/datasets/pull/4127 | 1,197,297,756 | PR_kwDODunzps4132EN | 4,127 | Add configs with processed data in medical_dialog dataset | [] | closed | false | null | 1 | 2022-04-08T13:08:16Z | 2022-05-06T08:39:50Z | 2022-04-08T16:20:51Z | null | There exist processed data files that do not require parsing the raw data files (which can take a long time).
Fix #4122. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4127/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4127.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4127",
"merged_at": "2022-04-08T16:20:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4127.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4127"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3450/comments | https://api.github.com/repos/huggingface/datasets/issues/3450/events | https://github.com/huggingface/datasets/issues/3450 | 1,083,450,158 | I_kwDODunzps5AlCMu | 3,450 | Unexpected behavior doing Split + Filter | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-12-17T17:00:39Z | 2023-07-25T15:38:47Z | 2023-07-25T15:38:47Z | null | ## Describe the bug
I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on a dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter').
## Steps to reproduce the bug
```
from datasets import Dataset
import pandas as pd
dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']}
df = pd.DataFrame.from_dict(dic)
dataset = Dataset.from_pandas(df)
split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42)
train_dataset = split_dataset["train"]
eval_dataset = split_dataset["test"]
eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0)
print( eval_dataset['x'])
print(eval_dataset_2['x'])
```
One observes that elements in eval_dataset_2 are actually coming from the training dataset...
## Expected results
The expected results would be that the filtered eval dataset would only contain elements from the original eval dataset.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Windows 10
- Python version: 3.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3450/timeline | null | completed | null | null | false | [
"Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?\r\nsee https://github.com/huggingface/datasets/issues/3190\r\n\r\nMaybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`)"
] |
https://api.github.com/repos/huggingface/datasets/issues/353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/353/comments | https://api.github.com/repos/huggingface/datasets/issues/353/events | https://github.com/huggingface/datasets/issues/353 | 653,250,611 | MDU6SXNzdWU2NTMyNTA2MTE= | 353 | [Dataset requests] New datasets for Text Classification | [
{
"color": "008672",
"default": true,
"description": "Extra attention is needed",
"id": 1935892884,
"name": "help wanted",
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted"
},
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 8 | 2020-07-08T12:17:58Z | 2022-08-04T12:08:47Z | null | null | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- #386
- [x] Yelp-5
- #1315
- [x] Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]**
- [x] SST (Stanford Sentiment Treebank) **[include in glue]**
- #1934
- [ ] Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]**
- [x] Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification
- #791
- #1389
- [x] 20 Newsgroups. The 20 Newsgroups dataset **[done]**
- #410
- [x] Sogou News dataset **[done]**
- #450
- [x] Reuters news. The Reuters-21578 dataset [165] **[done]**
- #471
- [x] DBpedia. The DBpedia dataset [170]
- #1116
- [ ] Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database
- [ ] EUR-Lex. The EUR-Lex dataset
- [x] WOS. The Web Of Science (WOS) dataset **[done]**
- #424
- [ ] PubMed. PubMed [173]
- [x] TREC-QA: TREC-6 + TREC-50
- See above: TREC-6 dataset
- [x] Quora. The Quora dataset [180]
- #366
All these datasets are cited in https://arxiv.org/abs/2004.03705 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/353/timeline | null | null | null | null | false | [
"Pinging @mariamabarham as well",
"- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classification dataset",
"Thanks @jxmorris12 for pointing this out. \r\n\r\nIn glue we only have SST-2 maybe we can add separately SST-1.\r\n",
"This is the homepage for the Amazon dataset: https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products\r\n\r\nIs there an easy way to download kaggle datasets programmatically? If so, I can add this one!",
"Hi @jxmorris12 for now I think our `dl_manager` does not download from Kaggle.\r\n@thomwolf , @lhoestq",
"Pretty sure the quora dataset is the same one I implemented here: https://github.com/huggingface/nlp/pull/366",
"Great list. Any idea if Amazon Reviews has been added?\r\n\r\n- ~40 GB of text (sadly no emoji)\r\n- popular MLM pre-training dataset before bigger datasets like WebText https://arxiv.org/abs/1808.01371\r\n- turns out that binarizing the 1-5 star rating leads to great Pos/Neg/Neutral dataset, T5 paper claims to get very high accuracy (98%!) on this with small amount of finetuning https://arxiv.org/abs/2004.14546\r\n\r\nApologies if it's been included (great to see where) and if not, it's one of the better medium/large NLP dataset for semi-supervised learning, albeit a bit out of date. \r\n\r\nThanks!! \r\n\r\ncc @sshleifer ",
"On the Amazon Reviews dataset, the original UCSD website has noted these are now updated to include product reviews through 2018 -- actually quite recent compared to many other datasets. Almost certainly the largest NLP dataset out there with labels!\r\nhttps://jmcauley.ucsd.edu/data/amazon/ \r\n\r\nAny chance someone has time to onboard this dataset in a HF way?\r\n\r\ncc @sshleifer "
] |
https://api.github.com/repos/huggingface/datasets/issues/2010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2010/comments | https://api.github.com/repos/huggingface/datasets/issues/2010/events | https://github.com/huggingface/datasets/issues/2010 | 825,567,635 | MDU6SXNzdWU4MjU1Njc2MzU= | 2,010 | Local testing fails | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-03-09T09:01:38Z | 2021-03-09T14:06:03Z | 2021-03-09T14:06:03Z | null | I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes)
1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04)
```
Seems like a discrepancy with CI, perhaps a lib version that's not controlled?
Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2010/timeline | null | completed | null | null | false | [
"I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?",
"```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n",
"I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3985/comments | https://api.github.com/repos/huggingface/datasets/issues/3985/events | https://github.com/huggingface/datasets/issues/3985 | 1,175,982,937 | I_kwDODunzps5GGBNZ | 3,985 | [image feature] Too many files open error when image feature is returned as a path | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-03-21T21:54:05Z | 2022-03-23T18:19:27Z | 2022-03-23T18:19:27Z | null | ## Describe the bug
PR in context: #3967. If I load the dataset in this PR (TextVQA), and do a simple list comprehension on the dataset, I get `Too many open files error`. This is happening due to the way we are loading the image feature when a str path is returned from `_generate_examples`. Specifically, at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we open the file handle to the image but never close it. This, in my understanding, is causing the issue.
## Steps to reproduce the bug
Pull the PR locally and run the following code
```python
from datasets import load_dataset
dataset = load_dataset("./datasets/textvqa")["train"]
data = [item for item in dataset]
# Error happens
```
## Expected results
List comprehension should work smoothly
## Actual results
`Too many open files error`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- Platform: macOS-12.2-arm64-arm-64bit
- Python version: 3.10.0
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3985/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5861/comments | https://api.github.com/repos/huggingface/datasets/issues/5861/events | https://github.com/huggingface/datasets/pull/5861 | 1,709,807,340 | PR_kwDODunzps5Qf55q | 5,861 | Better error message when combining dataset dicts instead of datasets | [] | closed | false | null | 7 | 2023-05-15T10:36:24Z | 2023-05-23T10:40:13Z | 2023-05-23T10:32:58Z | null | close https://github.com/huggingface/datasets/issues/5851 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5861/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5861.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5861",
"merged_at": "2023-05-23T10:32:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5861.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5861"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007167 / 0.011353 (-0.004185) | 0.004914 / 0.011008 (-0.006094) | 0.096858 / 0.038508 (0.058350) | 0.033468 / 0.023109 (0.010359) | 0.297276 / 0.275898 (0.021378) | 0.344289 / 0.323480 (0.020809) | 0.005703 / 0.007986 (-0.002282) | 0.003972 / 0.004328 (-0.000357) | 0.075191 / 0.004250 (0.070940) | 0.046247 / 0.037052 (0.009194) | 0.317857 / 0.258489 (0.059368) | 0.347263 / 0.293841 (0.053422) | 0.035017 / 0.128546 (-0.093529) | 0.012036 / 0.075646 (-0.063611) | 0.332522 / 0.419271 (-0.086750) | 0.050188 / 0.043533 (0.006655) | 0.296627 / 0.255139 (0.041488) | 0.319196 / 0.283200 (0.035997) | 0.101100 / 0.141683 (-0.040583) | 1.484536 / 1.452155 (0.032382) | 1.606364 / 1.492716 (0.113648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203954 / 0.018006 (0.185948) | 0.436505 / 0.000490 (0.436015) | 0.003853 / 0.000200 (0.003654) | 0.000079 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025834 / 0.037411 (-0.011578) | 0.105759 / 0.014526 (0.091233) | 0.114289 / 0.176557 (-0.062268) | 0.174388 / 0.737135 (-0.562748) | 0.122248 / 0.296338 (-0.174090) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404218 / 0.215209 (0.189009) | 4.027900 / 2.077655 (1.950245) | 1.854757 / 1.504120 (0.350637) | 1.668882 / 1.541195 (0.127687) | 1.731451 / 1.468490 
(0.262961) | 0.707843 / 4.584777 (-3.876934) | 3.756386 / 3.745712 (0.010674) | 2.067751 / 5.269862 (-3.202110) | 1.313039 / 4.565676 (-3.252638) | 0.086442 / 0.424275 (-0.337833) | 0.012329 / 0.007607 (0.004722) | 0.505964 / 0.226044 (0.279919) | 5.050788 / 2.268929 (2.781860) | 2.353936 / 55.444624 (-53.090688) | 2.055560 / 6.876477 (-4.820917) | 2.162948 / 2.142072 (0.020876) | 0.850532 / 4.805227 (-3.954696) | 0.168560 / 6.500664 (-6.332104) | 0.063143 / 0.075469 (-0.012326) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.182723 / 1.841788 (-0.659065) | 14.779342 / 8.074308 (6.705034) | 14.461572 / 10.191392 (4.270180) | 0.163120 / 0.680424 (-0.517303) | 0.017978 / 0.534201 (-0.516223) | 0.419168 / 0.579283 (-0.160115) | 0.420955 / 0.434364 (-0.013409) | 0.509710 / 0.540337 (-0.030628) | 0.619586 / 1.386936 (-0.767350) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006804 / 0.011353 (-0.004549) | 0.005136 / 0.011008 (-0.005872) | 0.074910 / 0.038508 (0.036402) | 0.032552 / 0.023109 (0.009443) | 0.374998 / 0.275898 (0.099100) | 0.399219 / 0.323480 (0.075739) | 0.005615 / 0.007986 (-0.002371) | 0.004118 / 0.004328 (-0.000210) | 0.074219 / 0.004250 (0.069969) | 0.045924 / 0.037052 (0.008871) | 0.383228 / 0.258489 (0.124739) | 0.407195 / 0.293841 (0.113354) | 0.035460 / 0.128546 (-0.093086) | 0.012460 / 0.075646 (-0.063187) | 0.087077 / 0.419271 (-0.332195) | 0.050507 / 0.043533 (0.006974) | 0.369001 / 0.255139 (0.113862) | 0.385761 / 0.283200 (0.102561) | 0.106999 / 0.141683 (-0.034684) | 1.465456 / 1.452155 (0.013302) | 1.556962 / 1.492716 (0.064246) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214926 / 0.018006 (0.196920) | 0.436893 / 0.000490 (0.436403) | 0.003388 / 0.000200 (0.003188) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029919 / 0.037411 (-0.007492) | 0.110859 / 0.014526 (0.096333) | 0.120617 / 0.176557 (-0.055939) | 0.171781 / 0.737135 (-0.565355) | 0.125627 / 0.296338 (-0.170712) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436024 / 0.215209 (0.220815) | 4.359167 / 2.077655 (2.281512) | 2.188399 / 1.504120 (0.684279) | 2.001196 / 1.541195 (0.460001) | 2.023710 / 1.468490 (0.555220) | 0.713799 / 4.584777 (-3.870978) | 3.832217 / 3.745712 (0.086504) | 3.269351 / 5.269862 (-2.000510) | 1.534608 / 4.565676 (-3.031068) | 0.088505 / 0.424275 (-0.335770) | 0.012345 / 0.007607 (0.004738) | 0.542446 / 0.226044 (0.316401) | 5.377757 / 2.268929 (3.108828) | 2.659837 / 55.444624 (-52.784787) | 2.272356 / 6.876477 (-4.604120) | 2.297289 / 2.142072 (0.155217) | 0.855276 / 4.805227 (-3.949952) | 0.170666 / 6.500664 (-6.329998) | 0.064549 / 0.075469 (-0.010920) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255938 / 1.841788 (-0.585850) | 15.151471 / 8.074308 (7.077163) | 12.905762 / 10.191392 (2.714370) | 0.162425 / 0.680424 (-0.517999) | 0.017504 / 0.534201 (-0.516697) | 0.448671 / 0.579283 (-0.130612) | 0.422424 / 0.434364 (-0.011940) | 0.551772 / 0.540337 (0.011434) | 0.649115 / 1.386936 (-0.737821) |\n\n</details>\n</details>\n\n\n",
"Having those different checks helps providing an appropriate error message.\r\n\r\nIf the input is a dict, we suggest to select a split. If the input lists is a mix of iterable and non-iterable, we mention that it must be one or the other.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006559 / 0.011353 (-0.004794) | 0.004569 / 0.011008 (-0.006439) | 0.104503 / 0.038508 (0.065995) | 0.028220 / 0.023109 (0.005111) | 0.365507 / 0.275898 (0.089609) | 0.400238 / 0.323480 (0.076758) | 0.004968 / 0.007986 (-0.003017) | 0.003271 / 0.004328 (-0.001057) | 0.082804 / 0.004250 (0.078554) | 0.036299 / 0.037052 (-0.000754) | 0.361201 / 0.258489 (0.102712) | 0.410962 / 0.293841 (0.117121) | 0.030423 / 0.128546 (-0.098123) | 0.011612 / 0.075646 (-0.064034) | 0.331820 / 0.419271 (-0.087452) | 0.043822 / 0.043533 (0.000289) | 0.356242 / 0.255139 (0.101103) | 0.393035 / 0.283200 (0.109836) | 0.088426 / 0.141683 (-0.053257) | 1.484139 / 1.452155 (0.031984) | 1.566712 / 1.492716 (0.073995) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195887 / 0.018006 (0.177880) | 0.402720 / 0.000490 (0.402231) | 0.003516 / 0.000200 (0.003316) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023270 / 0.037411 (-0.014141) | 0.095834 / 0.014526 (0.081308) | 0.102924 / 0.176557 (-0.073632) | 0.161397 / 0.737135 (-0.575738) | 0.105225 / 0.296338 (-0.191114) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.451701 / 0.215209 (0.236491) | 4.495171 / 2.077655 (2.417517) | 2.223203 / 1.504120 (0.719083) | 2.035533 / 1.541195 (0.494338) | 2.076182 / 1.468490 
(0.607692) | 0.697317 / 4.584777 (-3.887460) | 3.406309 / 3.745712 (-0.339403) | 1.847179 / 5.269862 (-3.422683) | 1.158762 / 4.565676 (-3.406914) | 0.083067 / 0.424275 (-0.341208) | 0.012453 / 0.007607 (0.004846) | 0.546502 / 0.226044 (0.320458) | 5.455712 / 2.268929 (3.186784) | 2.654142 / 55.444624 (-52.790483) | 2.298722 / 6.876477 (-4.577755) | 2.383467 / 2.142072 (0.241395) | 0.805950 / 4.805227 (-3.999278) | 0.152479 / 6.500664 (-6.348185) | 0.066784 / 0.075469 (-0.008685) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.239129 / 1.841788 (-0.602659) | 13.603707 / 8.074308 (5.529398) | 14.062004 / 10.191392 (3.870612) | 0.130928 / 0.680424 (-0.549495) | 0.016907 / 0.534201 (-0.517294) | 0.381614 / 0.579283 (-0.197670) | 0.386770 / 0.434364 (-0.047594) | 0.455792 / 0.540337 (-0.084545) | 0.526092 / 1.386936 (-0.860844) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006202 / 0.011353 (-0.005151) | 0.004478 / 0.011008 (-0.006531) | 0.076492 / 0.038508 (0.037984) | 0.026703 / 0.023109 (0.003594) | 0.355134 / 0.275898 (0.079236) | 0.391207 / 0.323480 (0.067727) | 0.004852 / 0.007986 (-0.003133) | 0.003271 / 0.004328 (-0.001057) | 0.075080 / 0.004250 (0.070830) | 0.038803 / 0.037052 (0.001750) | 0.359530 / 0.258489 (0.101041) | 0.409044 / 0.293841 (0.115203) | 0.030366 / 0.128546 (-0.098180) | 0.011544 / 0.075646 (-0.064102) | 0.084849 / 0.419271 (-0.334423) | 0.040076 / 0.043533 (-0.003457) | 0.357359 / 0.255139 (0.102220) | 0.384075 / 0.283200 (0.100875) | 0.089130 / 0.141683 (-0.052552) | 1.520400 / 1.452155 (0.068246) | 1.604403 / 1.492716 (0.111687) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257127 / 0.018006 (0.239121) | 0.403691 / 0.000490 (0.403202) | 0.006894 / 0.000200 (0.006694) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024653 / 0.037411 (-0.012758) | 0.098834 / 0.014526 (0.084309) | 0.107276 / 0.176557 (-0.069281) | 0.158256 / 0.737135 (-0.578879) | 0.111339 / 0.296338 (-0.184999) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.445006 / 0.215209 (0.229797) | 4.452953 / 2.077655 (2.375299) | 2.168291 / 1.504120 (0.664171) | 1.969457 / 1.541195 (0.428262) | 2.003505 / 1.468490 (0.535015) | 0.695857 / 4.584777 (-3.888920) | 3.433424 / 3.745712 (-0.312288) | 2.466977 / 5.269862 (-2.802885) | 1.528167 / 4.565676 (-3.037509) | 0.082425 / 0.424275 (-0.341850) | 0.012470 / 0.007607 (0.004863) | 0.559039 / 0.226044 (0.332995) | 5.609496 / 2.268929 (3.340568) | 2.602898 / 55.444624 (-52.841726) | 2.273971 / 6.876477 (-4.602506) | 2.303370 / 2.142072 (0.161298) | 0.803875 / 4.805227 (-4.001352) | 0.151069 / 6.500664 (-6.349595) | 0.067956 / 0.075469 (-0.007513) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.334443 / 1.841788 (-0.507345) | 13.773252 / 8.074308 (5.698944) | 13.007042 / 10.191392 (2.815650) | 0.127939 / 0.680424 (-0.552485) | 0.016412 / 0.534201 (-0.517789) | 0.374744 / 0.579283 (-0.204539) | 0.396912 / 0.434364 (-0.037452) | 0.443197 / 0.540337 (-0.097140) | 0.528338 / 1.386936 (-0.858598) |\n\n</details>\n</details>\n\n\n",
"Just modified it to use only one loop. I think I managed to keep it readable as well",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007382 / 0.011353 (-0.003971) | 0.005143 / 0.011008 (-0.005865) | 0.097635 / 0.038508 (0.059127) | 0.034726 / 0.023109 (0.011616) | 0.315556 / 0.275898 (0.039658) | 0.355951 / 0.323480 (0.032472) | 0.006055 / 0.007986 (-0.001931) | 0.004264 / 0.004328 (-0.000065) | 0.073636 / 0.004250 (0.069386) | 0.050480 / 0.037052 (0.013428) | 0.316031 / 0.258489 (0.057542) | 0.363933 / 0.293841 (0.070092) | 0.035138 / 0.128546 (-0.093408) | 0.012407 / 0.075646 (-0.063239) | 0.333677 / 0.419271 (-0.085595) | 0.050586 / 0.043533 (0.007053) | 0.309507 / 0.255139 (0.054369) | 0.327043 / 0.283200 (0.043844) | 0.108975 / 0.141683 (-0.032708) | 1.447778 / 1.452155 (-0.004377) | 1.519971 / 1.492716 (0.027255) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248770 / 0.018006 (0.230764) | 0.603036 / 0.000490 (0.602546) | 0.000383 / 0.000200 (0.000183) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027094 / 0.037411 (-0.010317) | 0.104427 / 0.014526 (0.089901) | 0.120627 / 0.176557 (-0.055929) | 0.178790 / 0.737135 (-0.558346) | 0.124877 / 0.296338 (-0.171461) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.414442 / 0.215209 (0.199233) | 4.138009 / 2.077655 (2.060355) | 1.964642 / 1.504120 (0.460523) | 1.775940 / 1.541195 (0.234745) | 1.899719 / 1.468490 
(0.431228) | 0.695406 / 4.584777 (-3.889371) | 3.760470 / 3.745712 (0.014758) | 3.906958 / 5.269862 (-1.362904) | 2.028164 / 4.565676 (-2.537513) | 0.086704 / 0.424275 (-0.337571) | 0.012465 / 0.007607 (0.004857) | 0.512336 / 0.226044 (0.286292) | 5.108587 / 2.268929 (2.839659) | 2.435273 / 55.444624 (-53.009352) | 2.142387 / 6.876477 (-4.734090) | 2.258234 / 2.142072 (0.116162) | 0.854035 / 4.805227 (-3.951193) | 0.170443 / 6.500664 (-6.330222) | 0.065762 / 0.075469 (-0.009707) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187529 / 1.841788 (-0.654259) | 15.151164 / 8.074308 (7.076856) | 14.577545 / 10.191392 (4.386153) | 0.166973 / 0.680424 (-0.513450) | 0.017883 / 0.534201 (-0.516318) | 0.427607 / 0.579283 (-0.151676) | 0.417050 / 0.434364 (-0.017314) | 0.508116 / 0.540337 (-0.032221) | 0.590173 / 1.386936 (-0.796763) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007499 / 0.011353 (-0.003854) | 0.005195 / 0.011008 (-0.005813) | 0.073600 / 0.038508 (0.035091) | 0.033574 / 0.023109 (0.010464) | 0.377506 / 0.275898 (0.101608) | 0.432752 / 0.323480 (0.109272) | 0.006042 / 0.007986 (-0.001944) | 0.006427 / 0.004328 (0.002098) | 0.071666 / 0.004250 (0.067416) | 0.053243 / 0.037052 (0.016190) | 0.363972 / 0.258489 (0.105483) | 0.454988 / 0.293841 (0.161147) | 0.035118 / 0.128546 (-0.093428) | 0.012395 / 0.075646 (-0.063251) | 0.084308 / 0.419271 (-0.334963) | 0.048589 / 0.043533 (0.005057) | 0.368036 / 0.255139 (0.112897) | 0.399414 / 0.283200 (0.116215) | 0.109043 / 0.141683 (-0.032640) | 1.462972 / 1.452155 (0.010817) | 1.574443 / 1.492716 (0.081726) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.215107 / 0.018006 (0.197101) | 0.550255 / 0.000490 (0.549765) | 0.004630 / 0.000200 (0.004430) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029948 / 0.037411 (-0.007463) | 0.111866 / 0.014526 (0.097340) | 0.126559 / 0.176557 (-0.049997) | 0.181443 / 0.737135 (-0.555693) | 0.130559 / 0.296338 (-0.165779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.441410 / 0.215209 (0.226201) | 4.403406 / 2.077655 (2.325752) | 2.180276 / 1.504120 (0.676156) | 2.003729 / 1.541195 (0.462534) | 2.079394 / 1.468490 (0.610904) | 0.706061 / 4.584777 (-3.878716) | 3.805668 / 3.745712 (0.059956) | 3.864941 / 5.269862 (-1.404921) | 1.970468 / 4.565676 (-2.595208) | 0.086033 / 0.424275 (-0.338242) | 0.012261 / 0.007607 (0.004654) | 0.550427 / 0.226044 (0.324383) | 5.542270 / 2.268929 (3.273342) | 2.717047 / 55.444624 (-52.727577) | 2.449022 / 6.876477 (-4.427455) | 2.549567 / 2.142072 (0.407495) | 0.854981 / 4.805227 (-3.950247) | 0.169756 / 6.500664 (-6.330908) | 0.067082 / 0.075469 (-0.008387) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281369 / 1.841788 (-0.560419) | 15.445090 / 8.074308 (7.370781) | 13.205652 / 10.191392 (3.014260) | 0.170070 / 0.680424 (-0.510354) | 0.017815 / 0.534201 (-0.516385) | 0.425193 / 0.579283 (-0.154090) | 0.425205 / 0.434364 (-0.009159) | 0.493561 / 0.540337 (-0.046776) | 0.588994 / 1.386936 (-0.797942) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006345 / 0.011353 (-0.005008) | 0.004330 / 0.011008 (-0.006678) | 0.096327 / 0.038508 (0.057819) | 0.032964 / 0.023109 (0.009855) | 0.335600 / 0.275898 (0.059702) | 0.365635 / 0.323480 (0.042155) | 0.005435 / 0.007986 (-0.002551) | 0.005005 / 0.004328 (0.000677) | 0.071107 / 0.004250 (0.066856) | 0.044363 / 0.037052 (0.007311) | 0.339988 / 0.258489 (0.081498) | 0.375575 / 0.293841 (0.081734) | 0.028343 / 0.128546 (-0.100203) | 0.008587 / 0.075646 (-0.067059) | 0.324349 / 0.419271 (-0.094922) | 0.050105 / 0.043533 (0.006573) | 0.327398 / 0.255139 (0.072259) | 0.348479 / 0.283200 (0.065279) | 0.102357 / 0.141683 (-0.039326) | 1.419905 / 1.452155 (-0.032250) | 1.534887 / 1.492716 (0.042171) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212418 / 0.018006 (0.194412) | 0.433183 / 0.000490 (0.432693) | 0.000595 / 0.000200 (0.000395) | 0.000062 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027520 / 0.037411 (-0.009891) | 0.109503 / 0.014526 (0.094977) | 0.118202 / 0.176557 (-0.058355) | 0.177236 / 0.737135 (-0.559899) | 0.123736 / 0.296338 (-0.172602) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.405734 / 0.215209 (0.190525) | 4.039566 / 2.077655 (1.961911) | 1.838211 / 1.504120 (0.334091) | 1.652650 / 1.541195 (0.111456) | 1.753488 / 1.468490 
(0.284998) | 0.525258 / 4.584777 (-4.059519) | 3.704509 / 3.745712 (-0.041203) | 1.826794 / 5.269862 (-3.443067) | 1.236361 / 4.565676 (-3.329315) | 0.065619 / 0.424275 (-0.358656) | 0.011606 / 0.007607 (0.003999) | 0.505954 / 0.226044 (0.279910) | 5.054140 / 2.268929 (2.785211) | 2.352587 / 55.444624 (-53.092037) | 2.050601 / 6.876477 (-4.825875) | 2.097222 / 2.142072 (-0.044850) | 0.641044 / 4.805227 (-4.164183) | 0.140676 / 6.500664 (-6.359988) | 0.063217 / 0.075469 (-0.012253) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.177750 / 1.841788 (-0.664038) | 14.819346 / 8.074308 (6.745038) | 14.085937 / 10.191392 (3.894545) | 0.168618 / 0.680424 (-0.511806) | 0.017189 / 0.534201 (-0.517011) | 0.393415 / 0.579283 (-0.185868) | 0.422879 / 0.434364 (-0.011485) | 0.477289 / 0.540337 (-0.063048) | 0.569078 / 1.386936 (-0.817858) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006502 / 0.011353 (-0.004850) | 0.004640 / 0.011008 (-0.006368) | 0.073272 / 0.038508 (0.034764) | 0.033225 / 0.023109 (0.010116) | 0.359165 / 0.275898 (0.083267) | 0.391659 / 0.323480 (0.068179) | 0.005684 / 0.007986 (-0.002302) | 0.004045 / 0.004328 (-0.000284) | 0.072880 / 0.004250 (0.068629) | 0.046260 / 0.037052 (0.009208) | 0.361772 / 0.258489 (0.103283) | 0.402905 / 0.293841 (0.109064) | 0.027732 / 0.128546 (-0.100814) | 0.008864 / 0.075646 (-0.066783) | 0.081961 / 0.419271 (-0.337310) | 0.046170 / 0.043533 (0.002637) | 0.364198 / 0.255139 (0.109059) | 0.387468 / 0.283200 (0.104269) | 0.105456 / 0.141683 (-0.036227) | 1.457176 / 1.452155 (0.005021) | 1.564899 / 1.492716 (0.072183) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.179129 / 0.018006 (0.161123) | 0.439699 / 0.000490 (0.439209) | 0.002882 / 0.000200 (0.002682) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029123 / 0.037411 (-0.008288) | 0.112046 / 0.014526 (0.097520) | 0.122773 / 0.176557 (-0.053784) | 0.178404 / 0.737135 (-0.558732) | 0.127904 / 0.296338 (-0.168434) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440413 / 0.215209 (0.225204) | 4.407334 / 2.077655 (2.329680) | 2.112932 / 1.504120 (0.608812) | 1.911034 / 1.541195 (0.369840) | 2.057168 / 1.468490 (0.588677) | 0.525472 / 4.584777 (-4.059305) | 3.738894 / 3.745712 (-0.006818) | 1.807592 / 5.269862 (-3.462270) | 1.053837 / 4.565676 (-3.511839) | 0.066203 / 0.424275 (-0.358072) | 0.011965 / 0.007607 (0.004358) | 0.541137 / 0.226044 (0.315093) | 5.415040 / 2.268929 (3.146112) | 2.580476 / 55.444624 (-52.864148) | 2.234144 / 6.876477 (-4.642333) | 2.306014 / 2.142072 (0.163942) | 0.644221 / 4.805227 (-4.161006) | 0.142870 / 6.500664 (-6.357794) | 0.065015 / 0.075469 (-0.010454) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303465 / 1.841788 (-0.538323) | 14.949683 / 8.074308 (6.875375) | 14.370871 / 10.191392 (4.179478) | 0.142714 / 0.680424 (-0.537710) | 0.017372 / 0.534201 (-0.516829) | 0.403898 / 0.579283 (-0.175385) | 0.424781 / 0.434364 (-0.009583) | 0.465984 / 0.540337 (-0.074353) | 0.570863 / 1.386936 (-0.816074) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5881/comments | https://api.github.com/repos/huggingface/datasets/issues/5881/events | https://github.com/huggingface/datasets/issues/5881 | 1,719,402,643 | I_kwDODunzps5mfACT | 5,881 | Split dataset by node: index error when sharding iterable dataset | [] | open | false | null | 1 | 2023-05-22T10:36:13Z | 2023-05-23T08:32:14Z | null | null | ### Describe the bug
Context: we're splitting an iterable dataset by node and then passing it to a torch data loader with multiple workers
When we iterate over it for 5 steps, we don't get an error
When we instead iterate over it for 8 steps, we get an `IndexError` when fetching the data if we have too many workers
### Steps to reproduce the bug
Here, we have 2 JAX processes (`jax.process_count() = 2`) which we split the dataset over. The dataset loading script can be found here: https://huggingface.co/datasets/distil-whisper/librispeech_asr/blob/c6a1e805cbfeed5057400ac5937327d7e30281b8/librispeech_asr.py#L310
<details>
<summary> Code to reproduce </summary>
```python
from datasets import load_dataset
import jax
from datasets.distributed import split_dataset_by_node
from torch.utils.data import DataLoader
from tqdm import tqdm
# load an example dataset (https://huggingface.co/datasets/distil-whisper/librispeech_asr)
dataset = load_dataset("distil-whisper/librispeech_asr", "all", split="train.clean.100", streaming=True)
# just keep the text column -> no need to define a collator
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
# define some constants
batch_size = 256
num_examples = 5 # works for 5 examples, doesn't for 8
num_workers = dataset_text.n_shards
# try with multiple workers
dataloader = DataLoader(dataset_text, batch_size=batch_size, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Multiple workers"):
if i == num_examples:
break
# try splitting by node (we can't do this with `dataset_text` since `split_dataset_by_node` expects the Audio column for an ASR dataset)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())
# remove the text column again
dataset_text = dataset.remove_columns(set(dataset.features.keys()) - {"text"})
dataloader = DataLoader(dataset_text, batch_size=16, num_workers=num_workers // 2, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Split by node"):
if i == num_examples:
break
# too many workers
dataloader = DataLoader(dataset_text, batch_size=256, num_workers=num_workers, drop_last=True)
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
if i == num_examples:
break
```
</details>
<details>
<summary> With 5 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:16<00:00, 3.33s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 5/5 [00:13<00:00, 2.76s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary t
o have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more
files than 7.
Too many workers: 100%|███████████████████████████████████████████████████████████████████| 5/5 [00:15<00:00, 3.03s/it]
```
</details>
<details>
<summary> With 8 examples: </summary>
```
Multiple workers: 100%|███████████████████████████████████████████████████████████████████| 8/8 [00:13<00:00, 1.71s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Split by node: 100%|██████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00, 1.38s/it]
Assigning 7 shards (or data sources) of the dataset to each node.
Too many dataloader workers: 14 (max is dataset.n_shards=7). Stopping 7 dataloader workers.
To parallelize data loading, we give each process some shards (or data sources) to process. Therefore it's unnecessary to have a number of workers greater than dataset.n_shards=7. To enable more parallelism, please split the dataset in more files than 7.
Too many workers: 88%|██████████████████████████████████████████████████████████▋ | 7/8 [00:13<00:01, 1.89s/it]
Traceback (most recent call last):
File "distil-whisper/test_librispeech.py", line 36, in <module>
for i, batch in tqdm(enumerate(dataloader), total=num_examples, desc="Too many workers"):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data
return self._process_data(data)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 644, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 7.
Original Traceback (most recent call last):
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
data.append(next(self.dataset_iter))
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 986, in __iter__
yield from self._iter_pytorch(ex_iterable)
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 920, in _iter_pytorch
for key, example in ex_iterable.shard_data_sources(worker_info.id, worker_info.num_workers):
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 540, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 796, in shard_data_sources
self.ex_iterable.shard_data_sources(worker_id, num_workers),
File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 126, in shard_data_sources
requested_gen_kwargs = _merge_gen_kwargs([gen_kwargs_list[i] for i in shard_indices])
File "/home/sanchitgandhi/datasets/src/datasets/utils/sharding.py", line 76, in _merge_gen_kwargs
for key in gen_kwargs_list[0]
IndexError: list index out of range
```
</details>
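To make the failing case easier to reason about, here is a small standalone sketch of the suspected mechanism (my own illustration, not code from `datasets`): with 7 shards on this node and 14 requested workers, some workers end up with an empty shard assignment, and merging the `gen_kwargs` of an empty list indexes `gen_kwargs_list[0]` out of range, as in the traceback above.
```python
# hypothetical illustration of the suspected failure mode
n_shards_per_node = 7   # shards assigned to this node by split_dataset_by_node
num_workers = 14        # DataLoader workers requested on this node

for worker_id in range(num_workers):
    shard_indices = [i for i in range(n_shards_per_node) if i % num_workers == worker_id]
    gen_kwargs_list = [{"files": f"shard-{i}"} for i in shard_indices]
    if not gen_kwargs_list:
        # the merge step would do `for key in gen_kwargs_list[0]` -> IndexError
        print(f"worker {worker_id}: empty shard assignment")
```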
### Expected behavior
Should pass for both 5 and 8 examples
### Environment info
- `datasets` version: 2.12.1.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5881/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5881/timeline | null | null | null | null | false | [
"cc @lhoestq in case you have any ideas here! Might need a multi-host set-up to debug (can give you access to a JAX one if you need)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6021/comments | https://api.github.com/repos/huggingface/datasets/issues/6021/events | https://github.com/huggingface/datasets/pull/6021 | 1,799,785,904 | PR_kwDODunzps5VP11Q | 6,021 | [docs] Update return statement of index search | [] | closed | false | null | 2 | 2023-07-11T21:33:32Z | 2023-07-12T17:13:02Z | 2023-07-12T17:03:00Z | null | Clarifies in the return statement of the docstring that the retrieval score is `IndexFlatL2` by default (see [PR](https://github.com/huggingface/transformers/issues/24739) and internal Slack [convo](https://huggingface.slack.com/archives/C01229B19EX/p1689105179711689)), and fixes the formatting because multiple return values are not supported. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6021/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6021",
"merged_at": "2023-07-12T17:03:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6021"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007697 / 0.011353 (-0.003656) | 0.004233 / 0.011008 (-0.006776) | 0.087890 / 0.038508 (0.049382) | 0.065305 / 0.023109 (0.042196) | 0.366919 / 0.275898 (0.091020) | 0.399656 / 0.323480 (0.076176) | 0.006753 / 0.007986 (-0.001232) | 0.003428 / 0.004328 (-0.000900) | 0.070180 / 0.004250 (0.065930) | 0.054164 / 0.037052 (0.017112) | 0.377130 / 0.258489 (0.118641) | 0.403456 / 0.293841 (0.109615) | 0.042639 / 0.128546 (-0.085907) | 0.012396 / 0.075646 (-0.063250) | 0.314235 / 0.419271 (-0.105036) | 0.061976 / 0.043533 (0.018443) | 0.376959 / 0.255139 (0.121820) | 0.433313 / 0.283200 (0.150113) | 0.031253 / 0.141683 (-0.110430) | 1.555749 / 1.452155 (0.103594) | 1.643905 / 1.492716 (0.151189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208630 / 0.018006 (0.190624) | 0.519532 / 0.000490 (0.519042) | 0.003719 / 0.000200 (0.003519) | 0.000099 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027403 / 0.037411 (-0.010008) | 0.080990 / 0.014526 (0.066464) | 0.090424 / 0.176557 (-0.086133) | 0.153922 / 0.737135 (-0.583213) | 0.098156 / 0.296338 (-0.198183) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519453 / 0.215209 (0.304244) | 5.100089 / 2.077655 (3.022434) | 2.212165 / 1.504120 (0.708045) | 1.894405 / 1.541195 (0.353210) | 1.922914 / 1.468490 
(0.454424) | 0.762443 / 4.584777 (-3.822334) | 4.669214 / 3.745712 (0.923502) | 5.016066 / 5.269862 (-0.253796) | 3.128821 / 4.565676 (-1.436856) | 0.091541 / 0.424275 (-0.332734) | 0.007582 / 0.007607 (-0.000026) | 0.652753 / 0.226044 (0.426709) | 6.601375 / 2.268929 (4.332446) | 3.076948 / 55.444624 (-52.367677) | 2.250544 / 6.876477 (-4.625933) | 2.404059 / 2.142072 (0.261987) | 0.994917 / 4.805227 (-3.810311) | 0.200318 / 6.500664 (-6.300346) | 0.069354 / 0.075469 (-0.006115) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.482559 / 1.841788 (-0.359229) | 20.722092 / 8.074308 (12.647784) | 17.703217 / 10.191392 (7.511825) | 0.215370 / 0.680424 (-0.465053) | 0.028208 / 0.534201 (-0.505993) | 0.425992 / 0.579283 (-0.153291) | 0.492785 / 0.434364 (0.058421) | 0.474154 / 0.540337 (-0.066183) | 0.644599 / 1.386936 (-0.742337) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008372 / 0.011353 (-0.002981) | 0.004543 / 0.011008 (-0.006465) | 0.070564 / 0.038508 (0.032056) | 0.066855 / 0.023109 (0.043746) | 0.386724 / 0.275898 (0.110826) | 0.432184 / 0.323480 (0.108704) | 0.005250 / 0.007986 (-0.002736) | 0.003630 / 0.004328 (-0.000698) | 0.069310 / 0.004250 (0.065060) | 0.055759 / 0.037052 (0.018707) | 0.375789 / 0.258489 (0.117299) | 0.417335 / 0.293841 (0.123494) | 0.043424 / 0.128546 (-0.085122) | 0.013106 / 0.075646 (-0.062541) | 0.087836 / 0.419271 (-0.331436) | 0.057770 / 0.043533 (0.014237) | 0.396694 / 0.255139 (0.141555) | 0.439350 / 0.283200 (0.156150) | 0.031660 / 0.141683 (-0.110023) | 1.571339 / 1.452155 (0.119185) | 1.667169 / 1.492716 (0.174452) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180534 / 0.018006 (0.162528) | 0.540027 / 0.000490 (0.539537) | 0.003573 / 0.000200 (0.003373) | 0.000141 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031380 / 0.037411 (-0.006032) | 0.083762 / 0.014526 (0.069236) | 0.098166 / 0.176557 (-0.078390) | 0.160761 / 0.737135 (-0.576374) | 0.097683 / 0.296338 (-0.198656) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.568074 / 0.215209 (0.352865) | 5.660544 / 2.077655 (3.582889) | 2.416698 / 1.504120 (0.912578) | 2.177096 / 1.541195 (0.635901) | 2.206178 / 1.468490 (0.737688) | 0.844864 / 4.584777 (-3.739912) | 4.793636 / 3.745712 (1.047923) | 7.062387 / 5.269862 (1.792525) | 4.201228 / 4.565676 (-0.364449) | 0.091997 / 0.424275 (-0.332279) | 0.007881 / 0.007607 (0.000274) | 0.679466 / 0.226044 (0.453422) | 6.580268 / 2.268929 (4.311340) | 3.229907 / 55.444624 (-52.214717) | 2.524877 / 6.876477 (-4.351600) | 2.463796 / 2.142072 (0.321723) | 0.975627 / 4.805227 (-3.829600) | 0.186670 / 6.500664 (-6.313994) | 0.065307 / 0.075469 (-0.010163) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.501447 / 1.841788 (-0.340340) | 21.231037 / 8.074308 (13.156729) | 17.591671 / 10.191392 (7.400279) | 0.212745 / 0.680424 (-0.467679) | 0.026100 / 0.534201 (-0.508101) | 0.428391 / 0.579283 (-0.150892) | 0.535268 / 0.434364 (0.100904) | 0.506733 / 0.540337 (-0.033604) | 0.660832 / 1.386936 (-0.726104) |\n\n</details>\n</details>\n\n\n"
] |
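As a usage illustration of the API whose docstring the PR above clarifies (a sketch assuming a dataset `ds` that already has a FAISS index named "embeddings", added e.g. with `ds.add_faiss_index(column="embeddings")`; not part of the PR itself):
```python
# minimal usage sketch
scores, retrieved_examples = ds.get_nearest_examples("embeddings", query_embedding, k=5)
# as the PR notes, the default index is IndexFlatL2, so scores are L2 distances
# (lower = closer), not similarity values
```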
https://api.github.com/repos/huggingface/datasets/issues/426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/426/comments | https://api.github.com/repos/huggingface/datasets/issues/426/events | https://github.com/huggingface/datasets/issues/426 | 664,203,897 | MDU6SXNzdWU2NjQyMDM4OTc= | 426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 6 | 2020-07-23T05:00:41Z | 2021-03-12T09:34:12Z | 2020-09-07T14:48:04Z | null | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/426/timeline | null | completed | null | null | false | [
"Yes that's definitely something we plan to add ^^",
"Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.",
"So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/data/ops/dataset_ops.py#L1623).\r\n\r\nThere, `num_parallel_calls` is turned into a tensor and and fed to `gen_dataset_ops.parallel_map_dataset` where it looks like tensorflow takes over.\r\n\r\nWe could start with something simple like a thread or process pool that `imap`s over some shards.\r\n ",
"Multiprocessing was added in #552 . You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`\r\n\r\nClosing this one, but feel free to reo-open if you have other questions",
"@lhoestq Great feature implemented! Do you have plans to add it to official tutorials [Processing data in a Dataset](https://huggingface.co/docs/datasets/processing.html?highlight=save#augmenting-the-dataset)? It took me sometime to find this parallel processing api.",
"Thanks for the heads up !\r\n\r\nI just added a paragraph about multiprocessing:\r\nhttps://huggingface.co/docs/datasets/master/processing.html#multiprocessing"
] |
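A tiny usage sketch of the API mentioned in the closing comments above (illustrative only; `tokenize` is a placeholder function, not something defined in the thread):
```python
# multiprocessed map/filter via the num_proc argument referenced in the comments
tokenized = dataset.map(tokenize, batched=True, num_proc=4)   # tokenize is hypothetical
filtered = dataset.filter(lambda x: len(x["text"]) > 0, num_proc=4)
```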
https://api.github.com/repos/huggingface/datasets/issues/2024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2024/comments | https://api.github.com/repos/huggingface/datasets/issues/2024/events | https://github.com/huggingface/datasets/pull/2024 | 827,842,962 | MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy | 2,024 | Remove print statement from mnist.py | [] | closed | false | null | 1 | 2021-03-10T14:39:58Z | 2021-03-11T18:03:52Z | 2021-03-11T18:03:51Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2024/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2024",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2024"
} | true | [
"Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/6089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6089/comments | https://api.github.com/repos/huggingface/datasets/issues/6089/events | https://github.com/huggingface/datasets/issues/6089 | 1,825,761,476 | I_kwDODunzps5s0ujE | 6,089 | AssertionError: daemonic processes are not allowed to have children | [] | open | false | null | 0 | 2023-07-28T06:04:00Z | 2023-07-28T06:04:00Z | null | null | ### Describe the bug
When I call load_dataset with num_proc > 0 in a daemon process, I get an error:
```python
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 564, in download_and_extract
return self.extract(self.download(url_or_urls))
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/download/download_manager.py", line 427, in download
downloaded_path_or_paths = map_nested(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 468, in map_nested
mapped = parallel_map(function, iterable, num_proc, types, disable_tqdm, desc, _single_map_nested)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/experimental.py", line 40, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 34, in parallel_map
return _map_with_multiprocessing_pool(
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/parallel/parallel.py", line 64, in _map_with_multiprocessing_pool
with Pool(num_proc, initargs=initargs, initializer=initializer) as pool:
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/context.py", line 119, in Pool
return Pool(processes, initializer, initargs, maxtasksperchild,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 215, in __init__
self._repopulate_pool()
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 306, in _repopulate_pool
return self._repopulate_pool_static(self._ctx, self.Process,
^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/pool.py", line 329, in _repopulate_pool_static
w.start()
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/multiprocessing/process.py", line 118, in start
    assert not _current_process._config.get('daemon'), \
           ^^^^^^^^^^^^^^^^^
AssertionError: daemonic processes are not allowed to have children
```
The download is IO-intensive work; maybe datasets can replace the multiprocessing pool with a multithreading pool when running in a daemon process.
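To illustrate the suggestion, a minimal sketch of what a thread-based alternative could look like (my own sketch; `download_one` is a placeholder, not a real datasets function):
```python
from concurrent.futures import ThreadPoolExecutor

def download_all(urls, num_proc):
    # threads are allowed inside a daemon process and suit IO-bound downloads
    with ThreadPoolExecutor(max_workers=num_proc) as pool:
        return list(pool.map(download_one, urls))  # download_one is hypothetical
```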
### Steps to reproduce the bug
1. Start a daemon process
2. Run load_dataset with num_proc > 0
### Expected behavior
No error.
### Environment info
Python 3.11.4
datasets latest master | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6089/timeline | null | null | null | null | false | [
"We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).",
"> We could add a \"threads\" parallel backend to `datasets.parallel.parallel_backend` to support downloading with threads but note that `download_and_extract` also decompresses archives, and this is a CPU-intensive task, which is not ideal for (Python) threads (good for IO-intensive tasks).\r\n\r\nGreat! Download takes more time than extract, multiple threads can download in parallel, which can speed up a lot."
] |
https://api.github.com/repos/huggingface/datasets/issues/4629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4629/comments | https://api.github.com/repos/huggingface/datasets/issues/4629/events | https://github.com/huggingface/datasets/issues/4629 | 1,293,418,800 | I_kwDODunzps5NGAEw | 4,629 | Rename repo default branch to main | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | closed | false | null | 0 | 2022-07-04T17:16:10Z | 2022-07-06T15:49:57Z | 2022-07-06T15:49:57Z | null | Rename repository default branch to `main` (instead of current `master`).
Once renamed, users will have to manually update their local repos:
- [ ] Upstream:
```
git branch -m master main
git fetch upstream main
git branch -u upstream/main main
git remote set-head upstream -a
```
- [ ] Origin:
Rename fork default branch as well at: https://github.com/USERNAME/lam/settings/branches
Then:
```
git fetch origin main
git remote set-head origin -a
```
CC: @sgugger | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4629/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/799 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/799/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/799/comments | https://api.github.com/repos/huggingface/datasets/issues/799/events | https://github.com/huggingface/datasets/pull/799 | 735,551,165 | MDExOlB1bGxSZXF1ZXN0NTE0OTIzNDMx | 799 | switch amazon reviews class label order | [] | closed | false | null | 0 | 2020-11-03T18:38:58Z | 2020-11-03T18:44:14Z | 2020-11-03T18:44:10Z | null | Switches the label order to be more intuitive for amazon reviews, #791. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/799/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/799/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/799.diff",
"html_url": "https://github.com/huggingface/datasets/pull/799",
"merged_at": "2020-11-03T18:44:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/799.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/799"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2665/comments | https://api.github.com/repos/huggingface/datasets/issues/2665/events | https://github.com/huggingface/datasets/pull/2665 | 946,822,036 | MDExOlB1bGxSZXF1ZXN0NjkxOTMwNjky | 2,665 | Adds APPS dataset to the hub [WIP] | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2021-07-17T13:13:17Z | 2022-10-03T09:38:10Z | 2022-10-03T09:38:10Z | null | A loading script for [APPS dataset](https://github.com/hendrycks/apps) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2665/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2665.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2665",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2665.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2665"
} | true | [
"Thanks for your contribution, @arampacha. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/3144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3144/comments | https://api.github.com/repos/huggingface/datasets/issues/3144/events | https://github.com/huggingface/datasets/issues/3144 | 1,033,573,760 | I_kwDODunzps49mxWA | 3,144 | Infer the features if missing | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-10-22T13:17:33Z | 2022-09-08T08:23:10Z | 2022-09-08T08:23:10Z | null | **Is your feature request related to a problem? Please describe.**
Some datasets, in particular community datasets, have no info file, thus no features.
**Describe the solution you'd like**
If a dataset has no features, the first loaded data (5-10 rows) could be used to infer the type.
Related: `datasets` would provide a way to load the data, and get the rows AND the features as the result.
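A rough sketch of what such inference could look like, using pyarrow's type inference (my own illustration, not an existing `datasets` feature):
```python
import pyarrow as pa
from datasets import Features

first_rows = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]  # e.g. the first 5-10 rows
inferred_features = Features.from_arrow_schema(pa.Table.from_pylist(first_rows).schema)
```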
**Describe alternatives you've considered**
The HF hub could also provide some UI to help the dataset maintainers make the types of their rows explicit, or automatically infer them as an initial proposal. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3144/timeline | null | completed | null | null | false | [
"Done by @lhoestq here: https://github.com/huggingface/datasets/pull/4500 (https://github.com/huggingface/datasets/pull/4500/files#diff-02930e1d966f4b41f9ddf15d961f16f5466d9bee583138657018c7329f71aa43R1255 in particular)\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3072/comments | https://api.github.com/repos/huggingface/datasets/issues/3072/events | https://github.com/huggingface/datasets/pull/3072 | 1,025,233,152 | PR_kwDODunzps4tJNnD | 3,072 | Fix pathlib patches for streaming | [] | closed | false | null | 0 | 2021-10-13T13:11:15Z | 2021-10-13T13:31:05Z | 2021-10-13T13:31:05Z | null | Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time)
`counter` now works in both streaming and non-streaming mode.
And the `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of Path.open is fixed as well
Note: the patches should only affect the datasets module, not the user's own code! That's why we should probably use something other than patch.object to patch the Path class methods.
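To make the concern concrete, a small sketch of why `patch.object` on `Path` leaks outside the library (illustrative only; `xopen` stands in for whatever streaming-aware replacement is used):
```python
from pathlib import Path
from unittest.mock import patch

def xopen(self, *args, **kwargs):  # hypothetical streaming-aware open
    print(f"patched open for {self}")

with patch.object(Path, "open", xopen):
    # every Path.open in the whole process is patched here, including user code,
    # which is why patching only within the datasets module would be safer
    Path("user_file.txt").open()
```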
cc @severo @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3072",
"merged_at": "2021-10-13T13:31:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3072"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/592/comments | https://api.github.com/repos/huggingface/datasets/issues/592/events | https://github.com/huggingface/datasets/pull/592 | 696,619,986 | MDExOlB1bGxSZXF1ZXN0NDgyNjc4MDkw | 592 | Test in memory and on disk | [] | closed | false | null | 0 | 2020-09-09T08:59:30Z | 2020-09-09T13:50:04Z | 2020-09-09T13:50:03Z | null | I added test parameters to do every test both in memory and on disk.
I also found a bug in concatenate_dataset thanks to the new tests and fixed it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/592/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/592",
"merged_at": "2020-09-09T13:50:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/592"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4779/comments | https://api.github.com/repos/huggingface/datasets/issues/4779/events | https://github.com/huggingface/datasets/issues/4779 | 1,325,997,225 | I_kwDODunzps5PCRyp | 4,779 | Loading natural_questions requires apache_beam even with existing preprocessed data | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-08-02T15:06:57Z | 2022-08-02T16:03:18Z | 2022-08-02T16:03:18Z | null | ## Describe the bug
When loading "natural_questions", the package "apache_beam" is required:
```
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
This requirement is unnecessary, once there exists preprocessed data and the script just needs to download it.
## Steps to reproduce the bug
```python
load_dataset("natural_questions", "dev", split="validation", revision="main")
```
## Expected results
No ImportError raised.
## Actual results
```
ImportError Traceback (most recent call last)
[<ipython-input-3-c938e7c05d02>](https://localhost:8080/#) in <module>()
----> 1 from datasets import load_dataset; ds = load_dataset("natural_questions", "dev", split="validation", revision="main")
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1732 revision=revision,
1733 use_auth_token=use_auth_token,
-> 1734 **config_kwargs,
1735 )
1736
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1504 download_mode=download_mode,
1505 data_dir=data_dir,
-> 1506 data_files=data_files,
1507 )
1508
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1246 ) from None
-> 1247 raise e1 from None
1248 else:
1249 raise FileNotFoundError(
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1180 download_config=download_config,
1181 download_mode=download_mode,
-> 1182 dynamic_modules_path=dynamic_modules_path,
1183 ).get_module()
1184 elif path.count("/") == 1: # community dataset on the Hub
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
490 base_path=hf_github_url(path=self.name, name="", revision=revision),
491 imports=imports,
--> 492 download_config=self.download_config,
493 )
494 additional_files = [(config.DATASETDICT_INFOS_FILENAME, dataset_infos_path)] if dataset_infos_path else []
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in _download_additional_modules(name, base_path, imports, download_config)
214 _them_str = "them" if len(needs_to_be_installed) > 1 else "it"
215 raise ImportError(
--> 216 f"To be able to use {name}, you need to install the following {_depencencies_str}: "
217 f"{', '.join(needs_to_be_installed)}.\nPlease install {_them_str} using 'pip install "
218 f"{' '.join(needs_to_be_installed.values())}' for instance'"
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
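For anyone hitting this before a fix, a hedged workaround sketch (my assumption, not an official recommendation): installing the unused dependency satisfies the import check, after which the preprocessed data is simply downloaded.
```python
# pip install apache-beam   # only needed to get past the import check
from datasets import load_dataset

ds = load_dataset("natural_questions", "dev", split="validation", revision="main")
```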
## Environment info
Colab notebook.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4779/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4779/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2526/comments | https://api.github.com/repos/huggingface/datasets/issues/2526/events | https://github.com/huggingface/datasets/issues/2526 | 925,929,228 | MDU6SXNzdWU5MjU5MjkyMjg= | 2,526 | Add COCO datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 17 | 2021-06-21T07:48:32Z | 2023-06-22T14:12:18Z | null | null | ## Adding a Dataset
- **Name:** COCO
- **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset.
- **Paper + website:** https://cocodataset.org/#home
- **Data:** https://cocodataset.org/#download
- **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2526/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2526/timeline | null | null | null | null | false | [
"I'm currently adding it, the entire dataset is quite big around 30 GB so I add splits separately. You can take a look here https://huggingface.co/datasets/merve/coco",
"I talked to @lhoestq and it's best if I download this dataset through TensorFlow datasets instead, so I'll be implementing that one really soon.\r\n@NielsRogge ",
"I started adding COCO, will be done tomorrow EOD\r\nmy work so far https://github.com/merveenoyan/datasets (my fork)",
"Hi Merve @merveenoyan , thank you so much for your great contribution! May I ask about the current progress of your implementation? Cuz I see the pull request is still in progess here. Or can I just run the COCO scripts in your fork repo?",
"Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon. ",
"> Hello @yixuanren I had another prioritized project about to be merged, but I'll start continuing today will finish up soon.\r\n\r\nIt's really nice of you!! I see you've commited another version just now",
"@yixuanren we're working on it, will be available soon, thanks a lot for your patience",
"Hi @NielsRogge and @merveenoyan, did you find a way to load a dataset with COCO annotations to HF's hub?\r\nI have a panoptic segmentation dataset in COCO format and would like to share it with the community.\r\nThanks in advance :)",
"The COCO format is not supported out of the box in the HF's hub - you'd need to reformat it to an [ImageFolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder) with metadata format, or write a [loading script](https://huggingface.co/docs/datasets/image_dataset#loading-script)",
"> The COCO format is not supported out of the box in the HF's hub - you'd need to reformat it to an [ImageFolder](https://huggingface.co/docs/datasets/image_dataset#imagefolder) with metadata format, or write a [loading script](https://huggingface.co/docs/datasets/image_dataset#loading-script)\r\n\r\nHi @lhoestq , thank you for your quick reply.\r\nI've correctly created a metadata.jsonl file for a dataset with instance segmentation annotations [here](https://huggingface.co/datasets/lombardata/data_2017)\r\nbut do not understand how I can integrate panoptic annotations with the metadata format of ImageFolder datasets. The \"problem\" with panoptic annotations is that we have a folder with images, a json file with annotations and another folder with png annotations.\r\n\r\nI checked between all the datasets already published on HuggingFace and, the only one who has uploaded a correct panoptic dataset is @NielsRogge [here](https://huggingface.co/datasets/nielsr/coco-panoptic-val2017) and [here](https://huggingface.co/datasets/nielsr/ade20k-panoptic-demo). Indeed he accomplished to have three fields : \r\n1.image (image)\r\n2.label (image)\r\n3.segments_info (list)\r\nbut I not find the corresponding code that allows to upload a panoptic dataset from this 3 sources.\r\nCan you please share an example code?\r\nThanks !",
"Both were uploaded using `ds.push_to_hub()` :)\r\n\r\nYou can get a Dataset from a python dictionary using `ds = Dataset.from_dict(...)` and casts the paths to images to the `Image()` type using `ds = ds.cast_column(\"image\", Image())`.\r\n\r\n```python\r\nfrom datasets import Dataset, Image\r\n\r\nds = Dataset.from_dict(...)\r\nds = ds.cast_column(\"image\", Image())\r\nds = ds.cast_column(\"label\", Image())\r\nds.push_to_hub(...)\r\n```",
"> Both were uploaded using `ds.push_to_hub()` :)\r\n> \r\n> You can get a Dataset from a python dictionary using `ds = Dataset.from_dict(...)` and casts the paths to images to the `Image()` type using `ds = ds.cast_column(\"image\", Image())`.\r\n> \r\n> ```python\r\n> from datasets import Dataset, Image\r\n> \r\n> ds = Dataset.from_dict(...)\r\n> ds = ds.cast_column(\"image\", Image())\r\n> ds = ds.cast_column(\"label\", Image())\r\n> ds.push_to_hub(...)\r\n> ```\r\n\r\nThank you very much @lhoestq , I succesfully created a hf dataset [here](https://huggingface.co/datasets/lombardata/panoptic_2023_06_21) with the two fields :\r\n1.image (image)\r\n2.label (image)\r\nfollowing your suggestions. Now still remain the problem of uploading **segments_info** information to the dataset.\r\nThere is a function that easily imports the _panoptic_coco_annotation.json_ file to a segment_info field?\r\nI think we must define a **list_of_segment**, i.e. a list of lists of this type : \r\n```python\r\n[ { \"area\": 214858, \"bbox\": [ 0, 0, 511, 760 ], \"category_id\": 0, \"id\": 7895160, \"iscrowd\": 0 }, { \"area\": 73067, \"bbox\": [ 98, 719, 413, 253 ], \"category_id\": 3, \"id\": 3289680, \"iscrowd\": 0 }, { \"area\": 832, \"bbox\": [ 53, 0, 101, 16 ], \"category_id\": 5, \"id\": 5273720, \"iscrowd\": 0 }, { \"area\": 70668, \"bbox\": [ 318, 60, 191, 392 ], \"category_id\": 8, \"id\": 15132390, \"iscrowd\": 0 }, { \"area\": 32696, \"bbox\": [ 0, 100, 78, 872 ], \"category_id\": 18, \"id\": 472063, \"iscrowd\": 0 }, { \"area\": 76045, \"bbox\": [ 42, 48, 264, 924 ], \"category_id\": 37, \"id\": 16713830, \"iscrowd\": 0 }, { \"area\": 27103, \"bbox\": [ 288, 482, 216, 306 ], \"category_id\": 47, \"id\": 16753408, \"iscrowd\": 0 } ]\r\n```\r\nand then apply again the **cast_column** function [here](https://github.com/huggingface/datasets/blob/2.13.0/src/datasets/arrow_dataset.py#L2060) but with a list as a second argument, like : \r\n```python\r\nfrom datasets import Dataset, Image\r\nds = ds.cast_column(\"image\", Image())\r\nds = ds.cast_column(\"label\", Image())\r\nds = ds.cast_column(\"segments_info\", list)\r\n```\r\nbut I do not see how to transfer the information of the _panoptic_coco_annotation.json_ to a list of lists of this type : \r\n```python\r\n[ { \"area\": 214858, \"bbox\": [ 0, 0, 511, 760 ], \"category_id\": 0, \"id\": 7895160, \"iscrowd\": 0 }, { \"area\": 73067, \"bbox\": [ 98, 719, 413, 253 ], \"category_id\": 3, \"id\": 3289680, \"iscrowd\": 0 }, { \"area\": 832, \"bbox\": [ 53, 0, 101, 16 ], \"category_id\": 5, \"id\": 5273720, \"iscrowd\": 0 }, { \"area\": 70668, \"bbox\": [ 318, 60, 191, 392 ], \"category_id\": 8, \"id\": 15132390, \"iscrowd\": 0 }, { \"area\": 32696, \"bbox\": [ 0, 100, 78, 872 ], \"category_id\": 18, \"id\": 472063, \"iscrowd\": 0 }, { \"area\": 76045, \"bbox\": [ 42, 48, 264, 924 ], \"category_id\": 37, \"id\": 16713830, \"iscrowd\": 0 }, { \"area\": 27103, \"bbox\": [ 288, 482, 216, 306 ], \"category_id\": 47, \"id\": 16753408, \"iscrowd\": 0 } ]\r\n```\r\nlike @NielsRogge has done [here](https://huggingface.co/datasets/nielsr/coco-panoptic-val2017) and [here](https://huggingface.co/datasets/nielsr/ade20k-panoptic-demo).\r\nThank you again for your help and have a good day !",
"You can pass this data in .from_dict() - no need to cast anything for this column\r\n\r\n```python\r\nds = Dataset.from_dict({\r\n \"image\": [...],\r\n \"label\": [...],\r\n \"segments_info\": [...],\r\n)}\r\n```\r\n\r\nwhere `segments_info` is the list of the segment_infos of all the examples in the dataset, and therefore is a list of lists of dicts.",
"> You can pass this data in .from_dict() - no need to cast anything for this column\r\n> \r\n> ```python\r\n> ds = Dataset.from_dict({\r\n> \"image\": [...],\r\n> \"label\": [...],\r\n> \"segments_info\": [...],\r\n> )}\r\n> ```\r\n> \r\n> where `segments_info` is the list of the segment_infos of all the examples in the dataset, and therefore is a list of lists of dicts.\r\n\r\nThank you for the quick reply @lhoestq , but then how to generate the `segments_info` list of lists of dicts starting from a _panoptic_coco_annotation.json_ file ?\r\n\r\n\r\n",
"You read the JSON file and transform the data yourself. I don't think there's an automatic converter somewhere",
"> You read the JSON file and transform the data yourself. I don't think there's an automatic converter somewhere\r\n\r\nPerfect, I've done it and succesfully uploaded a new dataset [here](https://huggingface.co/datasets/lombardata/panoptic_2023_06_22), but I've (I hope) a last problem.\r\nThe dataset has currently 302 images and, when I upload it to the hub, only the first page of images is correctly uploaded.\r\nWhen I try to see the second/third/fourth page of items of my dataset, I can see that the fields **segments_info** and **image_name** are correctly uploaded, while the images are not (the \"null\" string is shown everywhere).\r\n\r\nI've checked the path of images that are not uploaded and they exists, is there a problem with the size of the dataset ?\r\nHow can I upload the whole dataset to the hub ?\r\nThank you again @lhoestq and have a good day !",
"Awesome ! Your dataset looks all good 🤗 \r\n\r\nThe `null` in the viewer is a bug on our side, let me investigate"
] |
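For completeness, a hedged sketch of the manual conversion discussed at the end of the thread above (it assumes the standard COCO panoptic JSON layout; keys may differ in other exports):
```python
import json

with open("panoptic_coco_annotation.json") as f:  # file name taken from the thread, adjust as needed
    coco = json.load(f)

# one list of segment dicts per image, in the same order as the image/label lists for Dataset.from_dict
segments_info = [ann["segments_info"] for ann in coco["annotations"]]
```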
https://api.github.com/repos/huggingface/datasets/issues/3303 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3303/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3303/comments | https://api.github.com/repos/huggingface/datasets/issues/3303/events | https://github.com/huggingface/datasets/issues/3303 | 1,059,129,732 | I_kwDODunzps4_IQmE | 3,303 | DataCollatorWithPadding: TypeError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-11-20T11:59:55Z | 2021-11-21T07:05:37Z | 2021-11-21T07:05:37Z | null | Hi,
I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as follows, I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a CPU-only device or a GPU device.
Input:
```
checkpoint = 'bert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
```
Output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_42/1563280798.py in <module>
1 checkpoint = 'bert-base-uncased'
2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)
----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt")
TypeError: __init__() got an unexpected keyword argument 'return_tensors'
```
When I call the `help` method, it also confirms that there is no `return_tensors` argument.
Input:
```
help(DataCollatorWithPadding.__init__)
```
Output:
```
Help on function __init__ in module transformers.data.data_collator:
__init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) -> None
```
But the source file *[Data Collator - docs](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorwithpadding)* says that there is such an argument. By default, it returns PyTorch tensors while I need TF tensors.
What am I missing?
Please help me. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3303/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3303/timeline | null | completed | null | null | false | [
"\r\n> \r\n> Input:\r\n> \r\n> ```\r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"tf\")\r\n> ```\r\n> \r\n> Output:\r\n> \r\n> ```\r\n> TypeError Traceback (most recent call last)\r\n> /tmp/ipykernel_42/1563280798.py in <module>\r\n> 1 checkpoint = 'bert-base-uncased'\r\n> 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors=\"pt\")\r\n> TypeError: __init__() got an unexpected keyword argument 'return_tensors'\r\n> ```\r\n> \r\n\r\nThe issue is due to the older version of transformers and datasets. It has been resolved by upgrading their versions.\r\n\r\n`# upgrade transformers and datasets to latest versions`\r\n`!pip install --upgrade transformers`\r\n`!pip install --upgrade datasets`\r\n\r\nCheers!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1966/comments | https://api.github.com/repos/huggingface/datasets/issues/1966/events | https://github.com/huggingface/datasets/pull/1966 | 819,101,253 | MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0 | 1,966 | Fix metrics collision in separate multiprocessed experiments | [] | closed | false | null | 1 | 2021-03-01T17:45:18Z | 2021-03-02T13:05:45Z | 2021-03-02T13:05:44Z | null | As noticed in #1942 , there's a issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad since the lock of process 0 is what tells the other processes whether the corresponding cache file is available for writing/reading/deleting: we end up with one metric cache colliding with another one. This can raise FileNotFound errors when a metric tries to read the cache file after the second conflicting metric has deleted it.
To fix that, I made sure that the lock file of process 0 stays acquired from cache file creation to the end of the metric computation. This way the other metrics can simply sample a new hashing name in order to avoid the collision.
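A minimal sketch of the locking pattern described above (my own illustration with placeholder functions, not the actual Metric code):
```python
from filelock import FileLock

lock = FileLock("metric-cache-rank0.arrow.lock")
lock.acquire()                 # taken when the cache file is created...
try:
    write_predictions()        # placeholder: write this process's predictions
    result = compute_metric()  # placeholder: ...the lock is still held during computation
finally:
    lock.release()             # only now do other experiments see the cache name as free
```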
Finally I added missing tests for separate experiments in distributed setup. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1966/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"merged_at": "2021-03-02T13:05:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966"
} | true | [
"Since the failure was originally intermittent, there is no 100% telling that the problem is gone. \r\nBut if my artificial race condition setup https://github.com/huggingface/datasets/issues/1942#issuecomment-787124529 is to be the litmus test then the problem has been fixed, as with this PR branch that particular race condition is taken care of correctly.\r\n\r\nThank you for taking care of this, @lhoestq - locking can be very tricky to do right!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4642/comments | https://api.github.com/repos/huggingface/datasets/issues/4642/events | https://github.com/huggingface/datasets/issues/4642 | 1,295,748,083 | I_kwDODunzps5NO4vz | 4,642 | Streaming issue for ccdv/pubmed-summarization | [] | closed | false | null | 3 | 2022-07-06T12:13:07Z | 2022-07-06T14:17:34Z | 2022-07-06T14:17:34Z | null | ### Link
https://huggingface.co/datasets/ccdv/pubmed-summarization
### Description
This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined?
```
Status code: 400
Exception: FileNotFoundError
Message: https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip/train.txt
```
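For context, a hedged sketch of the kind of change that typically makes such a loading script streamable (an assumption about the fix, not the actual PR): iterate over the archive members instead of joining a local path onto the downloaded zip.
```python
# hypothetical streaming-friendly pattern for a dataset loading script
def _split_generators(self, dl_manager):
    archive = dl_manager.download(_URLS["train"])  # _URLS assumed to be defined in the script
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"files": dl_manager.iter_archive(archive)},  # yields (path, file_obj) pairs
        )
    ]
```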
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4642/timeline | null | completed | null | null | false | [
"Thanks for reporting @lewtun.\r\n\r\nI confirm there is an issue with streaming: it does not stream locally. ",
"Oh, after investigation, the source of the issue is in the Hub dataset loading script.\r\n\r\nI'm opening a PR on the Hub dataset.",
"I've opened a PR on their Hub dataset to support streaming: https://huggingface.co/datasets/ccdv/pubmed-summarization/discussions/2"
] |
https://api.github.com/repos/huggingface/datasets/issues/233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/233/comments | https://api.github.com/repos/huggingface/datasets/issues/233/events | https://github.com/huggingface/datasets/issues/233 | 630,432,132 | MDU6SXNzdWU2MzA0MzIxMzI= | 233 | Fail to download c4 english corpus | [] | closed | false | null | 5 | 2020-06-04T01:06:38Z | 2021-01-08T07:17:32Z | 2020-06-08T09:16:59Z | null | I run the following code to download the C4 English corpus.
```
dataset = nlp.load_dataset('c4', 'en', beam_runner='DirectRunner'
, data_dir='/mypath')
```
and I met the following failure:
```
Downloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/adam/.cache/huggingface/datasets/c4/en/2.3.0...
Traceback (most recent call last):
File "download_corpus.py", line 38, in <module>
, data_dir='/home/adam/data/corpus/en/c4')
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 420, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 816, in _download_and_prepare
dl_manager, verify_infos=False, pipeline=pipeline,
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/builder.py", line 457, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/adam/anaconda3/envs/adam/lib/python3.7/site-packages/nlp/datasets/c4/f545de9f63300d8d02a6795e2eb34e140c47e62a803f572ac5599e170ee66ecc/c4.py", line 175, in _split_generators
dl_manager.download_checksums(_CHECKSUMS_URL)
AttributeError: 'DownloadManager' object has no attribute 'download_checksums
```
Can I get any advice? | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/233/timeline | null | completed | null | null | false | [
"Hello ! Thanks for noticing this bug, let me fix that.\r\n\r\nAlso for information, as specified in the changelog of the latest release, C4 currently needs to have a runtime for apache beam to work on. Apache beam is used to process this very big dataset and it can work on dataflow, spark, flink, apex, etc. You can find more info on beam datasets [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md).\r\n\r\nOur goal in the future is to make available an already-processed version of C4 (as we do for wikipedia for example) so that users without apache beam runtimes can load it.",
"@lhoestq I am facing `IsADirectoryError` while downloading with this command.\r\nCan you pls look into it & help me.\r\nI'm using version 0.4.0 of `nlp`.\r\n\r\n```\r\ndataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n```\r\n\r\nHere's the complete stack trace.\r\n\r\n```\r\nDownloading and preparing dataset c4/en (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/devops/.cache/huggingface/datasets/c4/en/2.3.0/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7...\r\n\r\n---------------------------------------------------------------------------\r\nIsADirectoryError Traceback (most recent call last)\r\n<ipython-input-11-f622e6705e03> in <module>\r\n----> 1 dataset = load_dataset(\"c4\", 'en', data_dir='.', beam_runner='DirectRunner')\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)\r\n 547 # Download and prepare data\r\n 548 builder_instance.download_and_prepare(\r\n--> 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,\r\n 550 )\r\n 551 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 461 if not downloaded_from_gcs:\r\n 462 self._download_and_prepare(\r\n--> 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 464 )\r\n 465 # Sync info\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)\r\n 964 pipeline = beam_utils.BeamPipeline(runner=beam_runner, options=beam_options,)\r\n 965 super(BeamBasedBuilder, self)._download_and_prepare(\r\n--> 966 dl_manager, verify_infos=False, pipeline=pipeline,\r\n 967 ) # TODO handle verify_infos in beam datasets\r\n 968 # Run pipeline\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 516 split_dict = SplitDict(dataset_name=self.name)\r\n 517 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 518 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 519 # Checksums verification\r\n 520 if verify_infos:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/datasets/c4/096df5a27756d51957c959a2499453e60a08154971fceb017bbb29f54b11bef7/c4.py in _split_generators(self, dl_manager, pipeline)\r\n 187 if self.config.realnewslike:\r\n 188 files_to_download[\"realnews_domains\"] = _REALNEWS_DOMAINS_URL\r\n--> 189 file_paths = dl_manager.download_and_extract(files_to_download)\r\n 190 \r\n 191 if self.config.webtextlike:\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download_and_extract(self, url_or_urls)\r\n 218 extracted_path(s): `str`, extracted paths of given URL(s).\r\n 219 \"\"\"\r\n--> 220 return self.extract(self.download(url_or_urls))\r\n 221 \r\n 222 def get_recorded_sizes_checksums(self):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in download(self, url_or_urls)\r\n 156 lambda url: cached_path(url, download_config=self._download_config,), url_or_urls,\r\n 157 
)\r\n--> 158 self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)\r\n 159 return downloaded_path_or_paths\r\n 160 \r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/download_manager.py in _record_sizes_checksums(self, url_or_urls, downloaded_path_or_paths)\r\n 106 flattened_downloaded_path_or_paths = flatten_nested(downloaded_path_or_paths)\r\n 107 for url, path in zip(flattened_urls_or_urls, flattened_downloaded_path_or_paths):\r\n--> 108 self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)\r\n 109 \r\n 110 def download_custom(self, url_or_urls, custom_download):\r\n\r\n/data/anaconda/envs/hf/lib/python3.6/site-packages/nlp/utils/info_utils.py in get_size_checksum_dict(path)\r\n 77 \"\"\"Compute the file size and the sha256 checksum of a file\"\"\"\r\n 78 m = sha256()\r\n---> 79 with open(path, \"rb\") as f:\r\n 80 for chunk in iter(lambda: f.read(1 << 20), b\"\"):\r\n 81 m.update(chunk)\r\n\r\nIsADirectoryError: [Errno 21] Is a directory: '/'\r\n\r\n```\r\n\r\nCan anyone please try to see what I am doing wrong or is this a bug?",
"I have the same problem as @prashant-kikani",
"Looks like a bug in the dataset script, can you open an issue ?",
"I see the same issue as @prashant-kikani. I'm using `datasets` version 1.2.0 to download C4."
] |
https://api.github.com/repos/huggingface/datasets/issues/5352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5352/comments | https://api.github.com/repos/huggingface/datasets/issues/5352/events | https://github.com/huggingface/datasets/issues/5352 | 1,490,796,414 | I_kwDODunzps5Y279- | 5,352 | __init__() got an unexpected keyword argument 'input_size' | [] | open | false | null | 2 | 2022-12-12T02:52:03Z | 2022-12-19T01:38:48Z | null | null | ### Describe the bug
I try to define a custom configuration with an input_size attribute, following the instructions in "Specifying several dataset configurations" at https://huggingface.co/docs/datasets/v1.2.1/add_dataset.html
But when I load the dataset, I get the error "__init__() got an unexpected keyword argument 'input_size'".
### Steps to reproduce the bug
Following is the code to define the dataset:
```python
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig for CSV."""

    input_size: int = 2048


class MRF(datasets.ArrowBasedBuilder):
    """Archival MRF data"""

    BUILDER_CONFIG_CLASS = CsvConfig
    VERSION = datasets.Version("1.0.0")
    BUILDER_CONFIGS = [
        CsvConfig(name="default", version=VERSION, description="MRF data", input_size=2048),
    ]
    ...

    def _generate_examples(self):
        input_size = self.config.input_size
        if input_size > 1000:
            numin = 10000
        else:
            numin = 15000
```
Below is the code to load the dataset:
```python
reader = load_dataset("default", input_size=1024)
```
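For context, a minimal sketch of a config class that does accept a custom keyword, assuming the usual pattern of declaring the config as a dataclass (the names just mirror the report and are otherwise illustrative; as noted in the comments, `load_dataset` should also be given the script path rather than the config name):
```python
from dataclasses import dataclass

import datasets
from datasets import load_dataset


@dataclass
class CsvConfig(datasets.BuilderConfig):
    """BuilderConfig for CSV with a tunable input_size."""

    # BuilderConfig is a dataclass, so decorating the subclass with @dataclass
    # turns this field into an __init__ argument, which is what
    # CsvConfig(..., input_size=2048) and load_dataset(..., input_size=1024) need.
    input_size: int = 2048


# Hypothetical call once the script lives at path/to/mrf.py:
# reader = load_dataset("path/to/mrf.py", "default", input_size=1024)
```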
### Expected behavior
I hope to pass the "input_size" parameter to the MRF dataset and be able to change "input_size" to any value when loading the dataset.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-305.3.1.el8.x86_64-x86_64-with-glibc2.31
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5352/timeline | null | null | null | null | false | [
"Hi @J-shel, thanks for reporting.\r\n\r\nI think the issue comes from your call to `load_dataset`. As first argument, you should pass:\r\n- either the name of your dataset (\"mrf\") if this is already published on the Hub\r\n- or the path to the loading script of your dataset (\"path/to/your/local/mrf.py\").",
"Hi, following your suggestion, I changed my call to load_dataset. Below is the latest:\r\nreader = load_dataset('data/mrf.py',\"default\", input_size=1024, split=split, streaming=True, keep_in_memory=None)\r\nHowever, I still got the same error.\r\nI have one question that is if I only define input_size=2048 in BUILDER_CONFIGS, may I specify input_size=1024 when loading the dataset? Cause I found that I could only specify name=\"default\" since I only define name=\"default\" in BUILDER_CONFIGS."
] |
https://api.github.com/repos/huggingface/datasets/issues/940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/940/comments | https://api.github.com/repos/huggingface/datasets/issues/940/events | https://github.com/huggingface/datasets/pull/940 | 754,010,753 | MDExOlB1bGxSZXF1ZXN0NTI5OTc3OTQ2 | 940 | Add MSRA NER dataset | [] | closed | false | null | 1 | 2020-12-01T05:02:11Z | 2020-12-04T09:29:40Z | 2020-12-01T07:25:53Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/940/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/940/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/940.diff",
"html_url": "https://github.com/huggingface/datasets/pull/940",
"merged_at": "2020-12-01T07:25:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/940.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/940"
} | true | [
"LGTM, don't forget the tags ;)"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1605/comments | https://api.github.com/repos/huggingface/datasets/issues/1605/events | https://github.com/huggingface/datasets/issues/1605 | 770,979,620 | MDU6SXNzdWU3NzA5Nzk2MjA= | 1,605 | Navigation version breaking | [] | closed | false | null | 1 | 2020-12-18T15:36:24Z | 2022-10-05T12:35:11Z | 2022-10-05T12:35:11Z | null | Hi,
when navigating docs (Chrome, Ubuntu) (e.g. on this page: https://huggingface.co/docs/datasets/loading_metrics.html#using-a-custom-metric-script) the version control dropdown has the wrong string displayed as the current version:

**Edit:** this actually happens _only_ if you open a link to a concrete subsection.
IMO, the best way to fix this without getting too deep into the intricacies of retrieving version numbers from the URL would be to change [this](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L112) line to:
```
let label = (version in versionMapping) ? version : stableVersion
```
which delegates the check to the (already maintained) keys of the version mapping dictionary & should be more robust. There's a similar ternary expression [here](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L97) which should also fail in this case.
I'd also suggest swapping this [block](https://github.com/huggingface/datasets/blob/master/docs/source/_static/js/custom.js#L80-L90) to `string.contains(version) for version in versionMapping` which might be more robust. I'd add a PR myself but I'm by no means competent in JS :)
I also have a side question wrt. docs versioning: I'm trying to make docs for a project that are versioned like your dropdown versioning. I was wondering how you handle storage of multiple doc versions on your server. Do you update what `https://huggingface.co/docs/datasets` points to for every stable release & manually create new folders for each released version?
So far I'm building & publishing (scping) the docs to the server with a GitHub action, which works well for a single version, but I would ideally need to reorder the public files when a new release is triggered. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 1,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1605/timeline | null | completed | null | null | false | [
"Not relevant for our current docs :)."
] |
https://api.github.com/repos/huggingface/datasets/issues/5831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5831/comments | https://api.github.com/repos/huggingface/datasets/issues/5831/events | https://github.com/huggingface/datasets/issues/5831 | 1,701,813,835 | I_kwDODunzps5lb55L | 5,831 | [Bug]504 Server Error when loading dataset which was already cached | [] | open | false | null | 6 | 2023-05-09T10:31:07Z | 2023-05-10T01:48:20Z | null | null | ### Describe the bug
I have already cached the dataset using:
```
dataset = load_dataset("databricks/databricks-dolly-15k",
cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k")
```
After that, I tried to load it again on the same machine and got this error:
```
Traceback (most recent call last):
File "/mnt/home/llm/pythia/train.py", line 16, in <module>
dataset = load_dataset("databricks/databricks-dolly-15k",
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1773, in load_dataset
builder_instance = load_dataset_builder(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1502, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1186, in dataset_module_factory
raise e
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/datasets/load.py", line 1160, in dataset_module_factory
dataset_info = hf_api.dataset_info(
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_validators.py", line 120, in _inner_fn
return fn(*args, **kwargs)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/hf_api.py", line 1667, in dataset_info
hf_raise_for_status(r)
File "/mnt/data/conda/envs/pythia_ft/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py", line 301, in hf_raise_for_status
raise HfHubHTTPError(str(e), response=response) from e
huggingface_hub.utils._errors.HfHubHTTPError: 504 Server Error: Gateway Time-out for url: https://huggingface.co/api/datasets/databricks/databricks-dolly-15k
```
### Steps to reproduce the bug
1. cache the databricks-dolly-15k dataset using load_dataset, setting a cache_dir
2. use load_dataset again, setting the same cache_dir
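For reference, a sketch of the cache-only fallback mentioned in the comments below (it assumes the environment variable is set before `datasets` is imported):
```python
import os

# Skip the online freshness check and reuse the local cache.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

dataset = load_dataset(
    "databricks/databricks-dolly-15k",
    cache_dir="/mnt/data/llm/datasets/databricks-dolly-15k",
)
```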
### Expected behavior
Dataset loaded successfully.
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-4.18.0-372.16.1.el8_6.x86_64-x86_64-with-glibc2.27
- Python version: 3.9.16
- Huggingface_hub version: 0.14.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5831/timeline | null | reopened | null | null | false | [
"I am experiencing the same problem with the following environment:\r\n\r\n* `datasets` version: 2.11.0\r\n* Platform: `Linux 5.19.0-41-generic x86_64 GNU/Linux`\r\n* Python version: `3.8.5`\r\n* Huggingface_hub version: 0.13.3\r\n* PyArrow version: `11.0.0`\r\n* Pandas version: `1.5.3`\r\n\r\nTrying to get some diagnostics, I got the following: \r\n\r\n```python\r\n>>> from huggingface_hub import scan_cache_dir\r\n>>> sd = scan_cache_dir()\r\n>>> sd\r\nHFCacheInfo(size_on_disk=0, repos=frozenset(), warnings=[CorruptedCacheException('Repo path is not a directory: /home/myname/.cache/huggingface/hub/version_diffusers_cache.txt')])\r\n\r\n```\r\nHowever, that might also be because I had tried to manually specify the `cache_dir` and that resulted in trying to download the dataset again ... but into a folder one level higher up than it should have.\r\n\r\nNote that my issue is with the `huggan/wikiart` dataset, so it is not a dataset-specific issue.",
"same problem with a private dataset repo, seems the huggingface hub server got some connection problem?",
"Yes, dataset server seems down for now",
"@SingL3 You can avoid this error by setting the [`HF_DATASETS_OFFLINE`](https://huggingface.co/docs/datasets/v2.12.0/en/loading#offline) env variable to 1. By default, if an internet connection is available, we check whether the cache of a cached dataset is up-to-date.\r\n\r\n@lucidBrot `datasets`' cache is still not aligned with `huggigface_hub`'s. We plan to align it eventually.",
"Today we had a big issue affecting the Hugging Face Hub, thus all the `504 Server Error: Gateway Time-out` errors.\r\n\r\nIt is fixed now and loading your datasets should work as expected.",
"Hi, @albertvillanova.\r\nIf there is a locally cached version of datasets or something cache using huggingface_hub, when a network problem(either client or server) occurs, is it a better way to fallback to use the current cached version rather than raise a exception and exit?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2625/comments | https://api.github.com/repos/huggingface/datasets/issues/2625/events | https://github.com/huggingface/datasets/issues/2625 | 941,439,922 | MDU6SXNzdWU5NDE0Mzk5MjI= | 2,625 | ⚛️😇⚙️🔑 | [] | closed | false | null | 0 | 2021-07-11T12:14:34Z | 2021-07-12T05:55:59Z | 2021-07-12T05:55:59Z | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2625/timeline | null | completed | null | null | false | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/2154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2154/comments | https://api.github.com/repos/huggingface/datasets/issues/2154/events | https://github.com/huggingface/datasets/pull/2154 | 846,763,960 | MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | [] | closed | false | null | 1 | 2021-03-31T14:22:50Z | 2021-04-01T09:27:00Z | 2021-04-01T09:16:08Z | null | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
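As a usage sketch once this lands (illustrative only; the config name "bokmaal" is an assumption on my part, not taken from this PR):
```python
from datasets import load_dataset

# Hypothetical config name - check the dataset card for the actual
# configs covering Bokmål and Nynorsk.
norne = load_dataset("norne", "bokmaal")
print(norne["train"][0])
```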
See #1720. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2154/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"merged_at": "2021-04-01T09:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2154"
} | true | [
"Awesome!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1850/comments | https://api.github.com/repos/huggingface/datasets/issues/1850/events | https://github.com/huggingface/datasets/pull/1850 | 804,412,249 | MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx | 1,850 | Add cord 19 dataset | [] | closed | false | null | 4 | 2021-02-09T10:22:08Z | 2021-02-09T15:16:26Z | 2021-02-09T15:16:26Z | null | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### Extras:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1850/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"merged_at": "2021-02-09T15:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1850"
} | true | [
"Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129",
"@lhoestq FYI",
"Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today",
"Looks all good now ! Thanks a lot @ggdupont :)\r\nMerging"
] |
https://api.github.com/repos/huggingface/datasets/issues/4376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4376/comments | https://api.github.com/repos/huggingface/datasets/issues/4376/events | https://github.com/huggingface/datasets/issues/4376 | 1,242,218,144 | I_kwDODunzps5KCr6g | 4,376 | irc_disentagle viewer error | [] | closed | false | null | 5 | 2022-05-19T19:15:16Z | 2023-01-12T16:56:13Z | 2022-06-02T08:20:00Z | null | the dataviewer shows this message for "ubuntu" - "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
I get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4376/timeline | null | completed | null | null | false | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks for reporting, @labouz. I'm addressing it. ",
"The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.",
"parfait!\r\nit works now, thank you 🙏 ",
"Hi there, \r\nI see this issue is closed, but I am wondering if there is any chance the source files have been moved since this fix? I am stumbling into the same NonMatchingChecksumError noted by lebouz's second post once 118MB of data has been downloaded, and have tried the solutions noted in the various fix checksum posts linked here and in other posts regarding passing in \"reuse_cache_if_exists\" to download_mode. Any suggestions? Thank you!\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/611/comments | https://api.github.com/repos/huggingface/datasets/issues/611/events | https://github.com/huggingface/datasets/issues/611 | 698,863,988 | MDU6SXNzdWU2OTg4NjM5ODg= | 611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | [] | closed | false | null | 6 | 2020-09-11T05:29:12Z | 2022-06-01T15:11:43Z | 2022-06-01T15:11:43Z | null | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)
~/miniconda3/envs/dev/lib/python3.7/site-packages/nlp/arrow_dataset.py in from_pandas(cls, df, features, info, split)
223 info.features = features
224 pa_table: pa.Table = pa.Table.from_pandas(
--> 225 df=df, schema=pa.schema(features.type) if features is not None else None
226 )
227 return cls(pa_table, info=info, split=split)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pandas()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)
591 for i, maybe_fut in enumerate(arrays):
592 if isinstance(maybe_fut, futures.Future):
--> 593 arrays[i] = maybe_fut.result()
594
595 types = [x.type for x in arrays]
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
426 raise CancelledError()
427 elif self._state == FINISHED:
--> 428 return self.__get_result()
429
430 self._condition.wait(timeout)
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/thread.py in run(self)
55
56 try:
---> 57 result = self.fn(*self.args, **self.kwargs)
58 except BaseException as exc:
59 self.future.set_exception(exc)
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
557
558 try:
--> 559 result = pa.array(col, type=type_, from_pandas=True, safe=safe)
560 except (pa.ArrowInvalid,
561 pa.ArrowNotImplementedError,
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()
~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```
My code is :
```python
from nlp import Dataset
dataset = Dataset.from_pandas(emb)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/611/timeline | null | completed | null | null | false | [
"Can you give us stats/information on your pandas DataFrame?",
"```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n 2 start_price float64\r\n 3 shipping_fee float64\r\n 4 picture_url object \r\n 5 embeddings object \r\ndtypes: float64(2), int64(1), object(3)\r\nmemory usage: 915.2+ MB\r\n```",
"Thanks and some more on the `embeddings` and `picture_url` would be nice as well (type and max lengths of the elements)",
"`embedding` is `np.array` of shape `(128,)`. `picture_url` is url, such as 'https://i.ebayimg.com/00/s/MTE5OVgxNjAw/z/ZOsAAOSwAG9fHQq5/$_12.JPG?set_id=880000500F;https://i.ebayimg.com/00/s/MTE5OVgxNjAw/z/OSgAAOSwokBfHQq8/$_12.JPG?set_id=880000500F'",
"It looks like a Pyarrow limitation.\r\nI was able to reproduce the error with \r\n\r\n```python\r\nimport pandas as pd\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\n n = 1713614\r\ndf = pd.DataFrame.from_dict({\"a\": list(np.zeros((n, 128))), \"b\": range(n)})\r\npa.Table.from_pandas(df)\r\n```\r\n\r\nI also tried with 50% of the dataframe and it actually works.\r\nI created an issue on Apache Arrow's JIRA [here](https://issues.apache.org/jira/browse/ARROW-9976)\r\n\r\nOne way to fix that would be to chunk the dataframe and concatenate arrow tables.",
"It looks like it's going to be fixed in pyarrow 2.0.0 :)\r\n\r\nIn the meantime I suggest to chunk big dataframes to create several small datasets, and then concatenate them using [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3510/comments | https://api.github.com/repos/huggingface/datasets/issues/3510/events | https://github.com/huggingface/datasets/issues/3510 | 1,091,997,004 | I_kwDODunzps5BFo1M | 3,510 | `wiki_dpr` details for Open Domain Question Answering tasks | [] | closed | false | null | 2 | 2022-01-02T11:04:01Z | 2022-02-17T13:46:20Z | 2022-02-17T13:46:20Z | null | Hey guys!
Thanks for creating the `wiki_dpr` dataset!
I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regarding the same! Thanks a ton!
P.S.: (If one of @thomwolf @lewtun @lhoestq could respond, that would be even better since they have the first-hand details of the dataset. If anyone else has those, please reach out! Thanks!) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3510/timeline | null | completed | null | null | false | [
"Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018.\r\nEach instance contains a paragraph of at most 100 word, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).",
"Closed by:\r\n- #3534"
] |
https://api.github.com/repos/huggingface/datasets/issues/4653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4653/comments | https://api.github.com/repos/huggingface/datasets/issues/4653/events | https://github.com/huggingface/datasets/issues/4653 | 1,296,702,834 | I_kwDODunzps5NSh1y | 4,653 | Add Altlex dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-07-07T02:23:02Z | 2022-07-14T02:12:39Z | 2022-07-14T02:12:39Z | null | ## Adding a Dataset
- **Name:** *Altlex*
- **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles.”*
- **Paper:** *https://aclanthology.org/P16-1135.pdf*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
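For reference, a sketch of loading the JSONL file linked under **Data** directly (illustrative; it assumes the URL stays valid and that the generic `json` loader handles the gzipped file):
```python
from datasets import load_dataset

url = "https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz"
altlex = load_dataset("json", data_files=url, split="train")
print(altlex[0])
```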
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4653/timeline | null | completed | null | null | false | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/altlex)."
] |
https://api.github.com/repos/huggingface/datasets/issues/591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/591/comments | https://api.github.com/repos/huggingface/datasets/issues/591/events | https://github.com/huggingface/datasets/pull/591 | 696,530,413 | MDExOlB1bGxSZXF1ZXN0NDgyNjAxMzc1 | 591 | fix #589 (backward compat) | [] | closed | false | null | 0 | 2020-09-09T07:33:13Z | 2020-09-09T08:57:56Z | 2020-09-09T08:57:55Z | null | Fix #589 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/591/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/591.diff",
"html_url": "https://github.com/huggingface/datasets/pull/591",
"merged_at": "2020-09-09T08:57:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/591.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/591"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1019/comments | https://api.github.com/repos/huggingface/datasets/issues/1019/events | https://github.com/huggingface/datasets/pull/1019 | 755,582,090 | MDExOlB1bGxSZXF1ZXN0NTMxMjY2NzAz | 1,019 | Add caWaC dataset | [] | closed | false | null | 0 | 2020-12-02T20:18:55Z | 2020-12-03T14:47:09Z | 2020-12-03T14:47:09Z | null | Add dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1019/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1019",
"merged_at": "2020-12-03T14:47:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1019"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/720/comments | https://api.github.com/repos/huggingface/datasets/issues/720/events | https://github.com/huggingface/datasets/issues/720 | 716,581,266 | MDU6SXNzdWU3MTY1ODEyNjY= | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | [] | closed | false | null | 3 | 2020-10-07T14:27:13Z | 2020-12-23T14:04:31Z | 2020-12-23T14:04:31Z | null | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour:
```
import os
os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache'
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
```
Please note that I'm using the whole dataset: **use_dummy_dataset=False**
After around 4 hours (downloading and some other things) this is returned:
```
Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2...
---------------------------------------------------------------------------
UnpicklingError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
459 try:
--> 460 return pickle.load(fid, **pickle_kwargs)
461 except Exception:
UnpicklingError: pickle data was truncated
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
552 # Prepare split will record examples associated to the split
--> 553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
840 for key, record in utils.tqdm(
--> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
842 ):
/opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)
217 try:
--> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
219 # return super(tqdm...) will not catch exception
/opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
1128 try:
-> 1129 for obj in iterable:
1130 yield obj
~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)
131 break
--> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True)
133 vec_idx = 0
/opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)
462 raise IOError(
--> 463 "Failed to interpret file %s as a pickle" % repr(file))
464 finally:
OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-10-f28df370ac47> in <module>
1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets
----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False)
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)
307 generator_tokenizer = rag_tokenizer.generator
308 return cls(
--> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer
310 )
311
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)
298 self.config = config
299 if self._init_retrieval:
--> 300 self.init_retrieval()
301
302 @classmethod
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self)
324
325 logger.info("initializing retrieval")
--> 326 self.index.init_index()
327
328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None):
/opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self)
238 split=self.dataset_split,
239 index_name=self.index_name,
--> 240 dummy=self.use_dummy_dataset,
241 )
242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True)
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
609 download_config=download_config,
610 download_mode=download_mode,
--> 611 ignore_verifications=ignore_verifications,
612 )
613
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
474 if not downloaded_from_gcs:
475 self._download_and_prepare(
--> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
477 )
478 # Sync info
/opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
553 self._prepare_split(split_generator, **prepare_split_kwargs)
554 except OSError:
--> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or ""))
556
557 if verify_infos:
OSError: Cannot find data file.
```
Thanks
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/720/timeline | null | completed | null | null | false | [
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nUnpicklingError Traceback (most recent call last)\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 446 try:\r\n--> 447 return pickle.load(fid, **pickle_kwargs)\r\n 448 except Exception:\r\n\r\nUnpicklingError: pickle data was truncated\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 559 \r\n--> 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n\r\n~/src/datasets/src/datasets/builder.py in _prepare_split(self, split_generator)\r\n 847 writer.write(example)\r\n--> 848 finally:\r\n 849 num_examples, num_bytes = writer.finalize()\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs)\r\n 227 try:\r\n--> 228 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):\r\n 229 # return super(tqdm...) will not catch exception\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)\r\n 1132 try:\r\n-> 1133 for obj in iterable:\r\n 1134 yield obj\r\n\r\n/hdd/rag/cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files)\r\n 131 break\r\n--> 132 vecs = np.load(open(vectors_files.pop(0), \"rb\"), allow_pickle=True)\r\n 133 vec_idx = 0\r\n\r\n~/anaconda3/envs/eqa/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding)\r\n 449 raise IOError(\r\n--> 450 \"Failed to interpret file %s as a pickle\" % repr(file))\r\n 451 \r\n\r\nOSError: Failed to interpret file <_io.BufferedReader name='/hdd/rag/downloads/99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498'> as a pickle\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nOSError Traceback (most recent call last)\r\n<ipython-input-8-24351ff8ce44> in <module>\r\n 4 retriever = RagRetriever.from_pretrained(\"facebook/rag-sequence-nq\", \r\n 5 index_name=\"exact\",\r\n----> 6 use_dummy_dataset=False)\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs)\r\n 321 generator_tokenizer = rag_tokenizer.generator\r\n 322 return cls(\r\n--> 323 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer\r\n 324 )\r\n 325 \r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer)\r\n 310 self.config = config\r\n 311 if self._init_retrieval:\r\n--> 312 self.init_retrieval()\r\n 313 \r\n 314 @classmethod\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_retrieval(self)\r\n 338 \r\n 339 logger.info(\"initializing retrieval\")\r\n--> 340 self.index.init_index()\r\n 341 \r\n 342 def postprocess_docs(self, docs, input_strings, prefix, n_docs, 
return_tensors=None):\r\n\r\n~/src/transformers/src/transformers/retrieval_rag.py in init_index(self)\r\n 248 split=self.dataset_split,\r\n 249 index_name=self.index_name,\r\n--> 250 dummy=self.use_dummy_dataset,\r\n 251 )\r\n 252 self.dataset.set_format(\"numpy\", columns=[\"embeddings\"], output_all_columns=True)\r\n\r\n~/src/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)\r\n 615 builder_instance.download_and_prepare(\r\n 616 download_config=download_config,\r\n--> 617 download_mode=download_mode,\r\n 618 ignore_verifications=ignore_verifications,\r\n 619 )\r\n\r\n~/src/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)\r\n 481 # Sync info\r\n 482 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())\r\n--> 483 self.info.download_checksums = dl_manager.get_recorded_sizes_checksums()\r\n 484 self.info.size_in_bytes = self.info.dataset_size + self.info.download_size\r\n 485 # Save info\r\n\r\n~/src/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 560 if verify_infos:\r\n 561 verify_splits(self.info.splits, split_dict)\r\n--> 562 \r\n 563 # Update the info object with the splits.\r\n 564 self.info.splits = split_dict\r\n\r\nOSError: Cannot find data file.\r\n```\r\n\r\nThank you.",
"An update on my end. This seems like a transient issue. Reran the script from scratch overnight with no errors. ",
"Closing this one. Feel free to re-open if you have other questions about this issue"
] |
https://api.github.com/repos/huggingface/datasets/issues/5055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5055/comments | https://api.github.com/repos/huggingface/datasets/issues/5055/events | https://github.com/huggingface/datasets/pull/5055 | 1,394,503,844 | PR_kwDODunzps5ACyVU | 5,055 | Fix backward compatibility for dataset_infos.json | [] | closed | false | null | 1 | 2022-10-03T10:30:14Z | 2022-10-03T13:43:55Z | 2022-10-03T13:41:32Z | null | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored only if the README.md has a dataset_info field, which has precedence over the data in the JSON file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5055/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"merged_at": "2022-10-03T13:41:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/151/comments | https://api.github.com/repos/huggingface/datasets/issues/151/events | https://github.com/huggingface/datasets/pull/151 | 619,968,480 | MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz | 151 | Fix JSON tests. | [] | closed | false | null | 0 | 2020-05-18T07:17:38Z | 2020-05-18T07:21:52Z | 2020-05-18T07:21:51Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/151/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/151",
"merged_at": "2020-05-18T07:21:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/151"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3010/comments | https://api.github.com/repos/huggingface/datasets/issues/3010/events | https://github.com/huggingface/datasets/issues/3010 | 1,014,918,470 | I_kwDODunzps48fm1G | 3,010 | Chain filtering is leaking | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-10-04T09:04:55Z | 2022-06-01T17:36:44Z | 2022-06-01T17:36:44Z | null | ## Describe the bug
As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the bug described is occurring even when the data format is 'string'.
These samples show that the filtering behavior diverges from what's expected when chaining filter calls.
In sample 2, the second filter "leaks" data back into the results that should already have been removed by the first filter.
## Steps to reproduce the bug
Sample 1:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, [[1, 2], [3], [4]])
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}] as expected
filtered = ds
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[3]'}, {'a': '[4]'}] as expected
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Should be: [{'a': [4]}]
# > Prints: [{'a': [3]}]
```
Sample 2:
```python
import datasets
import json
items = [[1, 2], [3], [4]]
jsoned_items = map(json.dumps, [[1, 2], [3], [4]])
ds = datasets.Dataset.from_dict({'a': jsoned_items})
print(list(ds))
# > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}]
filtered = ds
# get all lists, which have a value bigger than 3 on its zero index
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[4]'}] as expected
# get all lists that are shorter than 2
filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
print(list(filtered))
# > Prints: [{'a': '[1, 2]'}]
# > Should be: [{'a': '[4]'}] (remain intact)
```
## Expected results
Expected and actual results are attached to the code snippets.
## Actual results
Expected and actual results are attached to the code snippets.
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
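A possible interim workaround (untested against this exact version, offered only as a hedged sketch) is to materialize the selection between the two filters so the second call does not reuse a stale indices mapping:
```python
import json
import datasets

ds = datasets.Dataset.from_dict({'a': ['[1, 2]', '[3]', '[4]']})
filtered = ds.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False)
filtered = filtered.flatten_indices()  # drop the indices mapping before chaining another filter
filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False)
print(list(filtered))  # expected: [{'a': '[4]'}]
```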
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3010/timeline | null | completed | null | null | false | [
"### Update:\r\nI wrote a bit cleaner code snippet (without transforming to json) that can expose leaking.\r\n```python\r\nimport datasets\r\nimport json\r\n\r\nitems = ['ab', 'c', 'df']\r\n\r\nds = datasets.Dataset.from_dict({'col': items})\r\nprint(list(ds))\r\n# > Prints: [{'col': 'ab'}, {'col': 'c'}, {'col': 'df'}]\r\n\r\nfiltered = ds\r\n\r\n# get all items that are starting with a character with ascii code bigger than 'a'\r\nfiltered = filtered.filter(lambda x: x['col'][0] > 'a', load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'c'}, {'col': 'df'}] as expected\r\n\r\n# get all items that are shorter than 2\r\nfiltered = filtered.filter(lambda x: len(x['col']) < 2, load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'ab'}] -> this is a leaked item from the first filter\r\n# > Should be: [{'col': 'c'}]\r\n```",
"Thanks for reporting. I'm looking into it",
"I just pushed a fix ! We'll do a new release soon.\r\nIn the meantime feel free to install `datasets` from source to play with it",
"Thanks, I'm already using it from your branch!"
] |
https://api.github.com/repos/huggingface/datasets/issues/6002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6002/comments | https://api.github.com/repos/huggingface/datasets/issues/6002/events | https://github.com/huggingface/datasets/pull/6002 | 1,786,053,060 | PR_kwDODunzps5UhP-Z | 6,002 | Add KLUE-MRC metrics | [] | closed | false | null | 1 | 2023-07-03T12:11:10Z | 2023-07-09T11:57:20Z | 2023-07-09T11:57:20Z | null | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.
Specifically, [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) reuses the SQuAD scoring script to evaluate SQuAD 2.0 and KorQuAD, but that script isn't suitable for KLUE-MRC because KLUE-MRC's format differs slightly from SQuAD 2.0, which is why I added a dedicated scoring script for KLUE-MRC.
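For illustration only (not part of this PR's code), SQuAD-2.0-style scoring is typically invoked as below via the `evaluate` library; the IDs and answers are made up:
```python
import evaluate

squad_v2 = evaluate.load("squad_v2")
predictions = [{"id": "klue-mrc-0", "prediction_text": "서울", "no_answer_probability": 0.0}]
references = [{"id": "klue-mrc-0", "answers": {"text": ["서울"], "answer_start": [30]}}]
print(squad_v2.compute(predictions=predictions, references=references))
```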
- [x] All tests passed
- [x] Added a metric card (referred the metric card of SQuAD 2.0)
- [x] Compatibility test with [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) passed
### References
- [KLUE: Korean Language Understanding Evaluation](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf)
- [KLUE on Hugging Face Datasets](https://huggingface.co/datasets/klue)
- #2416 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6002/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6002",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6002"
} | true | [
"The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co/docs/evaluate/creating_and_sharing)."
] |
https://api.github.com/repos/huggingface/datasets/issues/5446 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5446/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5446/comments | https://api.github.com/repos/huggingface/datasets/issues/5446/events | https://github.com/huggingface/datasets/pull/5446 | 1,550,591,588 | PR_kwDODunzps5IMyka | 5,446 | test v0.12.0.rc0 | [] | closed | false | null | 5 | 2023-01-20T10:05:19Z | 2023-01-20T10:43:22Z | 2023-01-20T10:13:48Z | null | DO NOT MERGE.
Only to test the CI.
cc @lhoestq @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5446/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5446/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5446.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5446",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5446.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5446"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@Wauplin I was testing it in a dedicated branch without opening a PR: https://github.com/huggingface/datasets/commits/test-hfh-0.12.0rc0",
"Oops, sorry @albertvillanova. I thought for next time I'll start the CIs before pinging everyone.\r\nI'm closing this one.",
"@Wauplin in your Slack message, you asked people from every major dependent library to check that our CI work. That is why I am checking it... :)\r\n\r\nAlso, I think for this purpose it is better to test it in a dedicated branch, rather than opening and closing a PR.",
"Yes, yes I know. Completely my fault on this one"
] |
https://api.github.com/repos/huggingface/datasets/issues/3750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3750/comments | https://api.github.com/repos/huggingface/datasets/issues/3750/events | https://github.com/huggingface/datasets/issues/3750 | 1,142,408,331 | I_kwDODunzps5EF8SL | 3,750 | `NonMatchingSplitsSizesError` for cats_vs_dogs dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-02-18T05:46:39Z | 2022-02-18T14:56:11Z | 2022-02-18T14:56:11Z | null | ## Describe the bug
Cannot download cats_vs_dogs dataset due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cats_vs_dogs")
```
## Expected results
Loading is successful.
## Actual results
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=7503250, num_examples=23422, dataset_name='cats_vs_dogs'), 'recorded': SplitInfo(name='train', num_bytes=7262410, num_examples=23410, dataset_name='cats_vs_dogs')}]
```
## Environment info
Reproduced on a fresh [Colab notebook](https://colab.research.google.com/drive/13GTvrSJbBGvL2ybDdXCBZwATd6FOkMub?usp=sharing).
## Additional Context
Originally reported in https://github.com/huggingface/transformers/issues/15698.
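As a temporary measure (a hedged suggestion, not a fix for the underlying split metadata mismatch), the size verification can be skipped:
```python
from datasets import load_dataset

# Skips checksum and split-size verification; only useful until the dataset metadata is fixed.
dataset = load_dataset("cats_vs_dogs", ignore_verifications=True)
```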
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3750/timeline | null | completed | null | null | false | [
"Thnaks for reporting @jaketae. We are fixing it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2623/comments | https://api.github.com/repos/huggingface/datasets/issues/2623/events | https://github.com/huggingface/datasets/pull/2623 | 941,265,342 | MDExOlB1bGxSZXF1ZXN0Njg3MTk0MjM3 | 2,623 | [Metrics] added wiki_split metrics | [] | closed | false | null | 1 | 2021-07-10T14:51:50Z | 2021-07-14T14:28:13Z | 2021-07-12T22:34:31Z | null | Fixes: #2606
This pull request adds combined metrics for the WikiSplit (English sentence splitting) task
Reviewer: @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2623/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2623.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2623",
"merged_at": "2021-07-12T22:34:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2623.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2623"
} | true | [
"Looks all good to me thanks :)\r\nJust did some minor corrections in the docstring"
] |
https://api.github.com/repos/huggingface/datasets/issues/6034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6034/comments | https://api.github.com/repos/huggingface/datasets/issues/6034/events | https://github.com/huggingface/datasets/issues/6034 | 1,804,501,361 | I_kwDODunzps5rjoFx | 6,034 | load_dataset hangs on WSL | [] | closed | false | null | 3 | 2023-07-14T09:03:10Z | 2023-07-14T14:48:29Z | 2023-07-14T14:48:29Z | null | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not sure why socket is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works for native Windows and my MacBook. (cache quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes
### Expected behavior
cache quickly recognized and loaded within a second
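For reference, a minimal sketch (not part of the original report) of forcing `datasets` to serve the dataset purely from the local cache via the documented offline mode:
```python
import os

# Must be set before `datasets` is imported so the offline mode is picked up.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

ds = load_dataset("ai2_arc", "ARC-Challenge")  # served from the existing cache, no network calls
```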
### Environment info
Please let me know if I should provide more environment information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6034/timeline | null | completed | null | null | false | [
"Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.",
"Thanks - that works! However it doesn't resolve the original issue (but I am not sure if it is a WSL problem)",
"We use `requests` to make HTTP requests (and `aiohttp` in the streaming mode), so I don't think we can provide much help regarding the socket issue (it probably has something to do with WSL). "
] |
https://api.github.com/repos/huggingface/datasets/issues/3077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3077/comments | https://api.github.com/repos/huggingface/datasets/issues/3077/events | https://github.com/huggingface/datasets/pull/3077 | 1,026,150,362 | PR_kwDODunzps4tMFPG | 3,077 | Fix loading a metric with internal import | [] | closed | false | null | 0 | 2021-10-14T09:06:58Z | 2021-10-14T09:14:56Z | 2021-10-14T09:14:55Z | null | After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports.
This PR adds a new test case and fixes this bug.
Fix #3076.
CC: @sgugger @merveenoyan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3077/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3077",
"merged_at": "2021-10-14T09:14:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3077"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2733/comments | https://api.github.com/repos/huggingface/datasets/issues/2733/events | https://github.com/huggingface/datasets/pull/2733 | 956,725,476 | MDExOlB1bGxSZXF1ZXN0NzAwMjc1NDMy | 2,733 | Add missing parquet known extension | [] | closed | false | null | 0 | 2021-07-30T13:01:20Z | 2021-07-30T13:24:31Z | 2021-07-30T13:24:30Z | null | This code was failing because the parquet extension wasn't recognized:
```python
from datasets import load_dataset
base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/"
data_files = {"train": base_url + "wikipedia-train.parquet"}
wiki = load_dataset("parquet", data_files=data_files, split="train", streaming=True)
```
It raises
```python
NotImplementedError: Extraction protocol for file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/wikipedia-train.parquet is not implemented yet
```
I added `parquet` to the list of known extensions
EDIT: added pickle, conllu, xml extensions as well | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2733/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2733.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2733",
"merged_at": "2021-07-30T13:24:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2733.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2733"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2758/comments | https://api.github.com/repos/huggingface/datasets/issues/2758/events | https://github.com/huggingface/datasets/pull/2758 | 960,206,575 | MDExOlB1bGxSZXF1ZXN0NzAzMjQ5Nzky | 2,758 | Raise ManualDownloadError when loading a dataset that requires previous manual download | [] | closed | false | null | 0 | 2021-08-04T10:19:55Z | 2021-08-04T11:36:30Z | 2021-08-04T11:36:30Z | null | This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing.
The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode.
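A simplified sketch of the intent (illustrative only, not the code added by this PR):
```python
class ManualDownloadError(Exception):
    pass

def ensure_manual_data(manual_download_instructions, data_dir):
    # Fail early with an explicit message when a dataset needs manually downloaded data
    # and the user did not point `data_dir` at it.
    if manual_download_instructions is not None and data_dir is None:
        raise ManualDownloadError(
            "This dataset requires manual data. " + manual_download_instructions
        )

ensure_manual_data(None, None)  # no manual data required: nothing happens
# ensure_manual_data("Please download the archive from ...", None)  # would raise ManualDownloadError
```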
Close #2749.
cc: @severo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2758/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2758.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2758",
"merged_at": "2021-08-04T11:36:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2758.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2758"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3637/comments | https://api.github.com/repos/huggingface/datasets/issues/3637/events | https://github.com/huggingface/datasets/issues/3637 | 1,115,526,438 | I_kwDODunzps5CfZUm | 3,637 | [TypeError: Couldn't cast array of type] Cannot load dataset in v1.18 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-01-26T21:38:02Z | 2022-02-09T16:15:53Z | 2022-02-09T16:15:53Z | null | ## Describe the bug
I am trying to load the [`GEM/RiSAWOZ` dataset](https://huggingface.co/datasets/GEM/RiSAWOZ) in `datasets` v1.18.1 and am running into a type error when casting the features. The strange thing is that I can load the dataset with v1.17.0. The error is also present if I install from `master`.
As far as I can tell, the dataset loading script is correct and the problematic features [here](https://huggingface.co/datasets/GEM/RiSAWOZ/blob/main/RiSAWOZ.py#L237) also look fine to me.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dset = load_dataset("GEM/RiSAWOZ")
```
## Expected results
I can load the dataset without error.
## Actual results
<details><summary>Traceback</summary>
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1083 example = self.info.features.encode_example(record)
-> 1084 writer.write(example, key)
1085 finally:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write(self, example, key, writer_batch_size)
445
--> 446 self.write_examples_on_file()
447
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
403 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 404 self.write_batch(batch_examples=batch_examples)
405 self.current_examples = []
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 497 arrays.append(pa.array(typed_sequence))
498 inferred_features[col] = typed_sequence.get_inferred_type()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
204 # We only do it if trying_type is False - since this is what the user asks for.
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
206 return out
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1064 if isinstance(feature, list):
-> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0]))
1066 elif isinstance(feature, Sequence):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
919 else:
--> 920 return func(array, *args, **kwargs)
921
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
-> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1088
TypeError: Couldn't cast array of type
struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string>
to
{'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), 
'电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)}
During handling of the above exception, another exception occurred:
TypeError Traceback (most recent call last)
/var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_44306/2896005239.py in <module>
----> 1 dset = load_dataset("GEM/RiSAWOZ")
2 dset
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)
1692
1693 # Download and prepare data
-> 1694 builder_instance.download_and_prepare(
1695 download_config=download_config,
1696 download_mode=download_mode,
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
593 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
594 if not downloaded_from_gcs:
--> 595 self._download_and_prepare(
596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
597 )
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
682 try:
683 # Prepare split will record examples associated to the split
--> 684 self._prepare_split(split_generator, **prepare_split_kwargs)
685 except OSError as e:
686 raise OSError(
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1084 writer.write(example, key)
1085 finally:
-> 1086 num_examples, num_bytes = writer.finalize()
1087
1088 split_generator.split_info.num_examples = num_examples
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in finalize(self, close_stream)
525 # Re-intializing to empty list for next batch
526 self.hkey_record = []
--> 527 self.write_examples_on_file()
528 if self.pa_writer is None:
529 if self.schema:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_examples_on_file(self)
402 # Since current_examples contains (example, key) tuples
403 batch_examples[col] = [row[0][col] for row in self.current_examples]
--> 404 self.write_batch(batch_examples=batch_examples)
405 self.current_examples = []
406
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
495 col_try_type = try_features[col] if try_features is not None and col in try_features else None
496 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 497 arrays.append(pa.array(typed_sequence))
498 inferred_features[col] = typed_sequence.get_inferred_type()
499 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib.array()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
203 # Also, when trying type "string", we don't want to convert integers or floats to "string".
204 # We only do it if trying_type is False - since this is what the user asks for.
--> 205 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
206 return out
207 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1063 # feature must be either [subfeature] or Sequence(subfeature)
1064 if isinstance(feature, list):
-> 1065 return pa.ListArray.from_arrays(array.offsets, _c(array.values, feature[0]))
1066 elif isinstance(feature, Sequence):
1067 if feature.length > -1:
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
1058 }
1059 if isinstance(feature, dict) and set(field.name for field in array.type) == set(feature):
-> 1060 arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
1061 return pa.StructArray.from_arrays(arrays, names=list(feature))
1062 elif pa.types.is_list(array.type):
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
942 if pa.types.is_list(array.type) and config.PYARROW_VERSION < version.parse("4.0.0"):
943 array = _sanitize(array)
--> 944 return func(array, *args, **kwargs)
945
946 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in wrapper(array, *args, **kwargs)
918 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
919 else:
--> 920 return func(array, *args, **kwargs)
921
922 return wrapper
~/miniconda3/envs/huggingface/lib/python3.8/site-packages/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1085 elif not isinstance(feature, (Sequence, dict, list, tuple)):
1086 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
-> 1087 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1088
1089
TypeError: Couldn't cast array of type
struct<医院-3.0T MRI: string, 医院-CT: string, 医院-DSA: string, 医院-公交线路: string, 医院-区域: string, 医院-名称: string, 医院-地址: string, 医院-地铁可达: string, 医院-地铁线路: string, 医院-性质: string, 医院-挂号时间: string, 医院-电话: string, 医院-等级: string, 医院-类别: string, 医院-重点科室: string, 医院-门诊时间: string, 天气-城市: string, 天气-天气: string, 天气-日期: string, 天气-温度: string, 天气-紫外线强度: string, 天气-风力风向: string, 旅游景点-区域: string, 旅游景点-名称: string, 旅游景点-地址: string, 旅游景点-开放时间: string, 旅游景点-是否地铁直达: string, 旅游景点-景点类型: string, 旅游景点-最适合人群: string, 旅游景点-消费: string, 旅游景点-特点: string, 旅游景点-电话号码: string, 旅游景点-评分: string, 旅游景点-门票价格: string, 汽车-价格(万元): string, 汽车-倒车影像: string, 汽车-动力水平: string, 汽车-厂商: string, 汽车-发动机排量(L): string, 汽车-发动机马力(Ps): string, 汽车-名称: string, 汽车-定速巡航: string, 汽车-巡航系统: string, 汽车-座位数: string, 汽车-座椅加热: string, 汽车-座椅通风: string, 汽车-所属价格区间: string, 汽车-油耗水平: string, 汽车-环保标准: string, 汽车-级别: string, 汽车-综合油耗(L/100km): string, 汽车-能源类型: string, 汽车-车型: string, 汽车-车系: string, 汽车-车身尺寸(mm): string, 汽车-驱动方式: string, 汽车-驾驶辅助影像: string, 火车-出发地: string, 火车-出发时间: string, 火车-到达时间: string, 火车-坐席: string, 火车-日期: string, 火车-时长: string, 火车-目的地: string, 火车-票价: string, 火车-舱位档次: string, 火车-车型: string, 火车-车次信息: string, 电影-主演: string, 电影-主演名单: string, 电影-具体上映时间: string, 电影-制片国家/地区: string, 电影-导演: string, 电影-年代: string, 电影-片名: string, 电影-片长: string, 电影-类型: string, 电影-豆瓣评分: string, 电脑-CPU: string, 电脑-CPU型号: string, 电脑-产品类别: string, 电脑-价格: string, 电脑-价格区间: string, 电脑-内存容量: string, 电脑-分类: string, 电脑-品牌: string, 电脑-商品名称: string, 电脑-屏幕尺寸: string, 电脑-待机时长: string, 电脑-显卡型号: string, 电脑-显卡类别: string, 电脑-游戏性能: string, 电脑-特性: string, 电脑-硬盘容量: string, 电脑-系列: string, 电脑-系统: string, 电脑-色系: string, 电脑-裸机重量: string, 电视剧-主演: string, 电视剧-主演名单: string, 电视剧-制片国家/地区: string, 电视剧-单集片长: string, 电视剧-导演: string, 电视剧-年代: string, 电视剧-片名: string, 电视剧-类型: string, 电视剧-豆瓣评分: string, 电视剧-集数: string, 电视剧-首播时间: string, 辅导班-上课方式: string, 辅导班-上课时间: string, 辅导班-下课时间: string, 辅导班-价格: string, 辅导班-区域: string, 辅导班-年级: string, 辅导班-开始日期: string, 辅导班-教室地点: string, 辅导班-教师: string, 辅导班-教师网址: string, 辅导班-时段: string, 辅导班-校区: string, 辅导班-每周: string, 辅导班-班号: string, 辅导班-科目: string, 辅导班-结束日期: string, 辅导班-课时: string, 辅导班-课次: string, 辅导班-课程网址: string, 辅导班-难度: string, 通用-产品类别: string, 通用-价格区间: string, 通用-品牌: string, 通用-系列: string, 酒店-价位: string, 酒店-停车场: string, 酒店-区域: string, 酒店-名称: string, 酒店-地址: string, 酒店-房型: string, 酒店-房费: string, 酒店-星级: string, 酒店-电话号码: string, 酒店-评分: string, 酒店-酒店类型: string, 飞机-准点率: string, 飞机-出发地: string, 飞机-到达时间: string, 飞机-日期: string, 飞机-目的地: string, 飞机-票价: string, 飞机-航班信息: string, 飞机-舱位档次: string, 飞机-起飞时间: string, 餐厅-人均消费: string, 餐厅-价位: string, 餐厅-区域: string, 餐厅-名称: string, 餐厅-地址: string, 餐厅-推荐菜: string, 餐厅-是否地铁直达: string, 餐厅-电话号码: string, 餐厅-菜系: string, 餐厅-营业时间: string, 餐厅-评分: string>
to
{'旅游景点-名称': Value(dtype='string', id=None), '旅游景点-区域': Value(dtype='string', id=None), '旅游景点-景点类型': Value(dtype='string', id=None), '旅游景点-最适合人群': Value(dtype='string', id=None), '旅游景点-消费': Value(dtype='string', id=None), '旅游景点-是否地铁直达': Value(dtype='string', id=None), '旅游景点-门票价格': Value(dtype='string', id=None), '旅游景点-电话号码': Value(dtype='string', id=None), '旅游景点-地址': Value(dtype='string', id=None), '旅游景点-评分': Value(dtype='string', id=None), '旅游景点-开放时间': Value(dtype='string', id=None), '旅游景点-特点': Value(dtype='string', id=None), '餐厅-名称': Value(dtype='string', id=None), '餐厅-区域': Value(dtype='string', id=None), '餐厅-菜系': Value(dtype='string', id=None), '餐厅-价位': Value(dtype='string', id=None), '餐厅-是否地铁直达': Value(dtype='string', id=None), '餐厅-人均消费': Value(dtype='string', id=None), '餐厅-地址': Value(dtype='string', id=None), '餐厅-电话号码': Value(dtype='string', id=None), '餐厅-评分': Value(dtype='string', id=None), '餐厅-营业时间': Value(dtype='string', id=None), '餐厅-推荐菜': Value(dtype='string', id=None), '酒店-名称': Value(dtype='string', id=None), '酒店-区域': Value(dtype='string', id=None), '酒店-星级': Value(dtype='string', id=None), '酒店-价位': Value(dtype='string', id=None), '酒店-酒店类型': Value(dtype='string', id=None), '酒店-房型': Value(dtype='string', id=None), '酒店-停车场': Value(dtype='string', id=None), '酒店-房费': Value(dtype='string', id=None), '酒店-地址': Value(dtype='string', id=None), '酒店-电话号码': Value(dtype='string', id=None), '酒店-评分': Value(dtype='string', id=None), '电脑-品牌': Value(dtype='string', id=None), '电脑-产品类别': Value(dtype='string', id=None), '电脑-分类': Value(dtype='string', id=None), '电脑-内存容量': Value(dtype='string', id=None), '电脑-屏幕尺寸': Value(dtype='string', id=None), '电脑-CPU': Value(dtype='string', id=None), '电脑-价格区间': Value(dtype='string', id=None), '电脑-系列': Value(dtype='string', id=None), '电脑-商品名称': Value(dtype='string', id=None), '电脑-系统': Value(dtype='string', id=None), '电脑-游戏性能': Value(dtype='string', id=None), '电脑-CPU型号': Value(dtype='string', id=None), '电脑-裸机重量': Value(dtype='string', id=None), '电脑-显卡类别': Value(dtype='string', id=None), '电脑-显卡型号': Value(dtype='string', id=None), '电脑-特性': Value(dtype='string', id=None), '电脑-色系': Value(dtype='string', id=None), '电脑-待机时长': Value(dtype='string', id=None), '电脑-硬盘容量': Value(dtype='string', id=None), '电脑-价格': Value(dtype='string', id=None), '火车-出发地': Value(dtype='string', id=None), '火车-目的地': Value(dtype='string', id=None), '火车-日期': Value(dtype='string', id=None), '火车-车型': Value(dtype='string', id=None), '火车-坐席': Value(dtype='string', id=None), '火车-车次信息': Value(dtype='string', id=None), '火车-时长': Value(dtype='string', id=None), '火车-出发时间': Value(dtype='string', id=None), '火车-到达时间': Value(dtype='string', id=None), '火车-票价': Value(dtype='string', id=None), '飞机-出发地': Value(dtype='string', id=None), '飞机-目的地': Value(dtype='string', id=None), '飞机-日期': Value(dtype='string', id=None), '飞机-舱位档次': Value(dtype='string', id=None), '飞机-航班信息': Value(dtype='string', id=None), '飞机-起飞时间': Value(dtype='string', id=None), '飞机-到达时间': Value(dtype='string', id=None), '飞机-票价': Value(dtype='string', id=None), '飞机-准点率': Value(dtype='string', id=None), '天气-城市': Value(dtype='string', id=None), '天气-日期': Value(dtype='string', id=None), '天气-天气': Value(dtype='string', id=None), '天气-温度': Value(dtype='string', id=None), '天气-风力风向': Value(dtype='string', id=None), '天气-紫外线强度': Value(dtype='string', id=None), '电影-制片国家/地区': Value(dtype='string', id=None), '电影-类型': Value(dtype='string', id=None), '电影-年代': Value(dtype='string', id=None), '电影-主演': Value(dtype='string', id=None), '电影-导演': Value(dtype='string', id=None), 
'电影-片名': Value(dtype='string', id=None), '电影-主演名单': Value(dtype='string', id=None), '电影-具体上映时间': Value(dtype='string', id=None), '电影-片长': Value(dtype='string', id=None), '电影-豆瓣评分': Value(dtype='string', id=None), '电视剧-制片国家/地区': Value(dtype='string', id=None), '电视剧-类型': Value(dtype='string', id=None), '电视剧-年代': Value(dtype='string', id=None), '电视剧-主演': Value(dtype='string', id=None), '电视剧-导演': Value(dtype='string', id=None), '电视剧-片名': Value(dtype='string', id=None), '电视剧-主演名单': Value(dtype='string', id=None), '电视剧-首播时间': Value(dtype='string', id=None), '电视剧-集数': Value(dtype='string', id=None), '电视剧-单集片长': Value(dtype='string', id=None), '电视剧-豆瓣评分': Value(dtype='string', id=None), '辅导班-班号': Value(dtype='string', id=None), '辅导班-难度': Value(dtype='string', id=None), '辅导班-科目': Value(dtype='string', id=None), '辅导班-年级': Value(dtype='string', id=None), '辅导班-区域': Value(dtype='string', id=None), '辅导班-校区': Value(dtype='string', id=None), '辅导班-上课方式': Value(dtype='string', id=None), '辅导班-开始日期': Value(dtype='string', id=None), '辅导班-结束日期': Value(dtype='string', id=None), '辅导班-每周': Value(dtype='string', id=None), '辅导班-上课时间': Value(dtype='string', id=None), '辅导班-下课时间': Value(dtype='string', id=None), '辅导班-时段': Value(dtype='string', id=None), '辅导班-课次': Value(dtype='string', id=None), '辅导班-课时': Value(dtype='string', id=None), '辅导班-教室地点': Value(dtype='string', id=None), '辅导班-教师': Value(dtype='string', id=None), '辅导班-价格': Value(dtype='string', id=None), '辅导班-课程网址': Value(dtype='string', id=None), '辅导班-教师网址': Value(dtype='string', id=None), '汽车-名称': Value(dtype='string', id=None), '汽车-车型': Value(dtype='string', id=None), '汽车-级别': Value(dtype='string', id=None), '汽车-座位数': Value(dtype='string', id=None), '汽车-车身尺寸(mm)': Value(dtype='string', id=None), '汽车-厂商': Value(dtype='string', id=None), '汽车-能源类型': Value(dtype='string', id=None), '汽车-发动机排量(L)': Value(dtype='string', id=None), '汽车-发动机马力(Ps)': Value(dtype='string', id=None), '汽车-驱动方式': Value(dtype='string', id=None), '汽车-综合油耗(L/100km)': Value(dtype='string', id=None), '汽车-环保标准': Value(dtype='string', id=None), '汽车-驾驶辅助影像': Value(dtype='string', id=None), '汽车-巡航系统': Value(dtype='string', id=None), '汽车-价格(万元)': Value(dtype='string', id=None), '汽车-车系': Value(dtype='string', id=None), '汽车-动力水平': Value(dtype='string', id=None), '汽车-油耗水平': Value(dtype='string', id=None), '汽车-倒车影像': Value(dtype='string', id=None), '汽车-定速巡航': Value(dtype='string', id=None), '汽车-座椅加热': Value(dtype='string', id=None), '汽车-座椅通风': Value(dtype='string', id=None), '汽车-所属价格区间': Value(dtype='string', id=None), '医院-名称': Value(dtype='string', id=None), '医院-等级': Value(dtype='string', id=None), '医院-类别': Value(dtype='string', id=None), '医院-性质': Value(dtype='string', id=None), '医院-区域': Value(dtype='string', id=None), '医院-地址': Value(dtype='string', id=None), '医院-电话': Value(dtype='string', id=None), '医院-挂号时间': Value(dtype='string', id=None), '医院-门诊时间': Value(dtype='string', id=None), '医院-公交线路': Value(dtype='string', id=None), '医院-地铁可达': Value(dtype='string', id=None), '医院-地铁线路': Value(dtype='string', id=None), '医院-重点科室': Value(dtype='string', id=None), '医院-CT': Value(dtype='string', id=None), '医院-3.0T MRI': Value(dtype='string', id=None), '医院-DSA': Value(dtype='string', id=None)}
```
</details>
## Environment info
- `datasets` version: 1.18.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3637/timeline | null | completed | null | null | false | [
"Hi @lewtun!\r\n \r\nThis one was tricky to debug. Initially, I tought there is a bug in the recently-added (by @lhoestq ) `cast_array_to_feature` function because `git bisect` points to the https://github.com/huggingface/datasets/commit/6ca96c707502e0689f9b58d94f46d871fa5a3c9c commit. Then, I noticed that the feature tpye of the `dialogue` field is `list`, which explains why you didn't get an error in earlier versions. Is there a specific reason why you use `list` instead of `Sequence` in the script? Maybe to avoid turning list of dicts to dicts of lists as it's done by `Sequence` for compatibility with TFDS or for performance reasons? If the field was `Sequence`, you would get an error in `encode_nested_example` because **the scripts yields some additional (nested) columns which are not specified in the `features` dictionary**. Previously, these additional columns would've been ignored by PyArrow (1), but now we have a check for them (2).\r\n(1) See PyArrow behavior:\r\n```\r\n>>> pa.array([{\"a\": 2, \"b\": 3}], type=pa.struct({\"a\": pa.int32()})) # pyarrow ignores the extra column\r\n-- is_valid: all not null\r\n-- child 0 type: int32\r\n [\r\n 2\r\n ]\r\n ```\r\n\r\n(2) Check:\r\nhttps://github.com/huggingface/datasets/blob/4c417d52def6e20359ca16c6723e0a2855e5c3fd/src/datasets/table.py#L1059\r\n\r\nThe fix is very simple: just add the missing columns to the _EMPTY_BELIEF_STATE list:\r\n```python\r\n_EMPTY_BELIEF_STATE.extend(['通用-产品类别', '火车-舱位档次', '通用-系列', '通用-价格区间', '通用-品牌'])\r\n```",
"Hey @mariosasko, thank you so much for figuring this one out - it certainly looks like a tricky bug 😱 ! I don't think there's a specific reason to use `list` instead of `Sequence` with the script, but I'll let the dataset creators know to see if your suggestion is acceptable.\r\n\r\nThank you again!",
"Thanks, this was indeed the fix! Would it make sense to produce a more informative error message in such cases? \r\n\r\nThe issue can be closed. \r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/254/comments | https://api.github.com/repos/huggingface/datasets/issues/254/events | https://github.com/huggingface/datasets/issues/254 | 635,057,568 | MDU6SXNzdWU2MzUwNTc1Njg= | 254 | [Feature request] Be able to remove a specific sample of the dataset | [] | closed | false | null | 1 | 2020-06-09T02:22:13Z | 2020-06-09T08:41:38Z | 2020-06-09T08:41:38Z | null | As mentioned in #117, it's currently not possible to remove a sample of the dataset.
But it is an important use case: after some preprocessing, some samples might end up empty, for example. We should be able to remove these samples from the dataset, or at least mark them as `removed` so that they are skipped when iterating over the dataset.
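For instance (an illustrative sketch; see also the maintainer's reply below pointing to `filter`), dropping empty samples could look like:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["keep me", "", "keep me too"]})
# Keep only non-empty samples; `filter` returns a new dataset without the removed rows.
ds = ds.filter(lambda example: len(example["text"]) > 0)
print(ds["text"])  # ['keep me', 'keep me too']
```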
I think it should be a feature. What do you think ?
---
Any workaround in the meantime? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/254/timeline | null | completed | null | null | false | [
"Oh yes you can now do that with the `dataset.filter()` method that was added in #214 "
] |
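As a side note on the workaround mentioned in the comment above, here is a minimal sketch of dropping samples with `Dataset.filter()`; the toy data and the emptiness check are illustrative assumptions, not part of the original issue.

```python
from datasets import Dataset

# Hypothetical toy data: pretend empty "text" values are the samples to drop.
ds = Dataset.from_dict({"text": ["hello", "", "world", ""]})

# filter() keeps only the rows for which the predicate returns True,
# which effectively removes the unwanted samples.
ds_clean = ds.filter(lambda example: example["text"] != "")

print(ds_clean["text"])  # ['hello', 'world']
```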
https://api.github.com/repos/huggingface/datasets/issues/3162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3162/comments | https://api.github.com/repos/huggingface/datasets/issues/3162/events | https://github.com/huggingface/datasets/issues/3162 | 1,035,462,136 | I_kwDODunzps49t-X4 | 3,162 | `datasets-cli test` should work with datasets without scripts | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 5 | 2021-10-25T18:52:30Z | 2021-11-25T16:04:29Z | null | null | It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).
I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3162/timeline | null | null | null | null | false | [
"> It would be really useful to be able to run `datasets-cli test`for datasets that don't have scripts attached to them (whether the datasets are private or not).\r\n> \r\n> I wasn't able to run the script for a private test dataset that I had created on the hub (https://huggingface.co/datasets/huggingface/DataMeasurementsTest/tree/main) -- although @lhoestq came to save the day!\r\n\r\nwhy don't you try to share that info with people, so you can also save some days.",
"Hi ! You can run the command if you download the repository\r\n```\r\ngit clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n```\r\nand run the command\r\n```\r\ndatasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n```\r\n\r\n(though on my side it doesn't manage to download the data since the dataset is private ^^)",
"> Hi ! You can run the command if you download the repository\r\n> \r\n> ```\r\n> git clone https://huggingface.co/datasets/huggingface/DataMeasurementsTest\r\n> ```\r\n> \r\n> and run the command\r\n> \r\n> ```\r\n> datasets-cli test DataMeasurementsTest/DataMeasurementsTest.py\r\n> ```\r\n> \r\n> (though on my side it doesn't manage to download the data since the dataset is private ^^)\r\n\r\nHi! Thanks for the info. \r\ngit cannot find the repository. Do you know if they have depreciated these tests and created a new one?",
"I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`",
"> I think it's become private, but feel free to try with any other dataset like `lhoestq/test` for example at `https://huggingface.co/datasets/lhoestq/test`\r\n\r\nyour example repo and this page `https://huggingface.co/docs/datasets/add_dataset.html` helped me to solve.. thanks a lot"
] |
https://api.github.com/repos/huggingface/datasets/issues/3960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3960/comments | https://api.github.com/repos/huggingface/datasets/issues/3960/events | https://github.com/huggingface/datasets/issues/3960 | 1,173,148,884 | I_kwDODunzps5F7NTU | 3,960 | Load local dataset error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | 12 | 2022-03-18T03:32:49Z | 2022-03-31T01:59:34Z | null | null | When I used datasets==1.11.0, everything worked fine. After updating to the latest version, I get an error like this:
```
>>> from datasets import load_dataset
>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']}
>>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification')
[] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder
**config_kwargs,
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__
super().__init__(*args, **kwargs)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__
sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
I need some help to solve the problem, thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3960/timeline | null | null | null | null | false | [
"Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n```python\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n```\r\n\r\n\r\nLet us know if that resolves the issue.",
"> Hi! Instead of @nateraw's `image-folder`, I suggest using the newly released `imagefolder` dataset:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n> >>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\n> ```\r\n> \r\n> Let us know if that resolves the issue.\r\n\r\nSorry, replied late.\r\nThanks a lot! It's worked for me. But it seems much slower than before, and now gets stuck.....\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train/**'], 'validation': ['/ssd/datasets/imagenet/pytorch/val/**']}\r\n>>> ds = load_dataset('imagefolder', data_files=data_files, cache_dir='./', task='image-classification')\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 437283.97it/s]\r\nResolving data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 89094.29it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nDownloading and preparing dataset image_folder/default to ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091...\r\nDownloading data files #0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82289.56obj/s]\r\nDownloading data files #1: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 73559.11obj/s]\r\nDownloading data files #2: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81600.46obj/s]\r\nDownloading data files #3: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79691.56obj/s]\r\nDownloading data files #4: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82341.37obj/s]\r\nDownloading data files #5: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 75784.46obj/s]\r\nDownloading data files #6: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 81466.18obj/s]\r\nDownloading data files #7: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 82320.27obj/s]\r\nDownloading data files #8: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 78094.00obj/s]\r\nDownloading data files #9: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84057.59obj/s]\r\nDownloading data files #10: 
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 83082.31obj/s]\r\nDownloading data files #11: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:01<00:00, 79944.21obj/s]\r\nDownloading data files #12: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84569.77obj/s]\r\nDownloading data files #13: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 84949.63obj/s]\r\nDownloading data files #14: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80073/80073 [00:00<00:00, 80666.53obj/s]\r\nDownloading data files #15: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 80072/80072 [00:01<00:00, 76723.20obj/s]\r\n^[[Bloading data files #8: 94%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 75061/80073 [00:00<00:00, 82609.89obj/s]\r\nDownloading data files #9: 85%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 68120/80073 [00:00<00:00, 83868.54obj/s]\r\nDownloading data files #9: 96%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 76784/80073 [00:00<00:00, 84722.34obj/s]\r\nDownloading data files #10: 75%|███████████████████████████████████████████████████████████████████████████████████████▋ | 59995/80073 [00:00<00:00, 84148.19obj/s]\r\nDownloading data files #10: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77412/80073 [00:00<00:00, 85724.53obj/s]\r\nDownloading data files #11: 71%|███████████████████████████████████████████████████████████████████████████████████▎ | 57032/80073 [00:00<00:00, 79930.58obj/s]\r\nDownloading data files #11: 92%|███████████████████████████████████████████████████████████████████████████████████████████████████████████ | 73277/80073 [00:00<00:00, 78091.27obj/s]\r\nDownloading data files #12: 86%|█████████████████████████████████████████████████████████████████████████████████████████████████████ | 69125/80073 [00:00<00:00, 84723.02obj/s]\r\nDownloading data files #12: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 77803/80073 [00:00<00:00, 85351.59obj/s]\r\nDownloading data files #13: 75%|████████████████████████████████████████████████████████████████████████████████████████▏ | 60356/80073 [00:00<00:00, 84833.35obj/s]\r\nDownloading data files #13: 97%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 77368/80073 [00:00<00:00, 84475.10obj/s]\r\nDownloading data files #14: 72%|████████████████████████████████████████████████████████████████████████████████████▍ | 57751/80073 [00:00<00:00, 80727.33obj/s]\r\nDownloading data files #14: 92%|████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 74022/80073 [00:00<00:00, 78703.16obj/s]\r\nDownloading data files #15: 
78%|███████████████████████████████████████████████████████████████████████████████████████████▋ | 62724/80072 [00:00<00:00, 78387.33obj/s]\r\nDownloading data files #15: 99%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 78933/80072 [00:01<00:00, 79353.63obj/s]\r\n```",
"Wait a long time, it completed. I don't know why it's so slow...",
"You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nThanks!It's worked well.",
"> You can pass `ignore_verifications=True` in `load_dataset` to make it fast (to skip checksum verification). I'll add this tip to the docs.\r\n\r\nI find current `load_dataset` loads ImageNet still slowly, even add `ignore_verifications=True`.\r\nFirst loading, it costs about 20 min in my servers.\r\n```\r\nreal\t19m23.023s\r\nuser\t21m18.360s\r\nsys\t7m59.080s\r\n```\r\n\r\nSecond reusing, it costs about 15 min in my servers.\r\n```\r\nreal\t15m20.735s\r\nuser\t12m22.979s\r\nsys\t5m46.960s\r\n```\r\n\r\nI think it's too much slow, is there other method to make it faster?",
"And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n```python\r\ndef collate_fn(examples):\r\n pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n labels = torch.tensor([example[\"labels\"] for example in examples])\r\n return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n```\r\nHow to know the keys of example?",
"Loading the image files slowly, is it because the multiple processes load files at the same time?",
"Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs. \r\n\r\n> And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> \r\n> ```python\r\n> def collate_fn(examples):\r\n> pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> ```\r\n> \r\n> How to know the keys of example?\r\n\r\nWhat do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\n",
"> Could you please share the output you get after the second loading? Also, feel free to interrupt (`KeyboardInterrupt`) the process while waiting for it to end and share a traceback to show us where the process hangs.\r\n> \r\n> > And in transformers the [ViT example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py), could you make some changes ? Like the `collect_fn`\r\n> > ```python\r\n> > def collate_fn(examples):\r\n> > pixel_values = torch.stack([example[\"pixel_values\"] for example in examples])\r\n> > labels = torch.tensor([example[\"labels\"] for example in examples])\r\n> > return {\"pixel_values\": pixel_values, \"labels\": labels}\r\n> > ```\r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > \r\n> > How to know the keys of example?\r\n> \r\n> What do you mean by \"could you make some changes\".The `ViT` script doesn't remove unused columns by default, so the keys of an example are equal to the columns of the given dataset.\r\n\r\nThanks for your reply!\r\n\r\n1. I did not record the second output, so I run it again. \r\n```\r\n(merak) txacs@master:/dat/txacs/test$ time python test.py \r\nResolving data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1281167/1281167 [00:02<00:00, 469497.89it/s]\r\nResolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50001/50001 [00:00<00:00, 70123.73it/s]\r\nUsing custom data configuration default-baebca6347576b33\r\nReusing dataset image_folder (./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091)\r\n100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:10<00:00, 5.37s/it]\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-cd3fbdc025e03f8c.arrow\r\nLoading cached processed dataset at ./image_folder/default-baebca6347576b33/0.0.0/ee92df8e96c6907f3c851a987be3fd03d4b93b247e727b69a8e23ac94392a091/cache-b5a9de701bbdbb2b.arrow\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 1281167\r\n })\r\n validation: Dataset({\r\n features: ['image', 'labels'],\r\n num_rows: 50000\r\n })\r\n})\r\n\r\nreal\t10m10.413s\r\nuser\t9m33.195s\r\nsys\t2m47.528s\r\n```\r\nAlthough it cost less time than the last, but still slowly.\r\n\r\n2. Sorry, forgive my poor statement. I solved it, updating to new script 'run_image_classification.py'.",
"Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.",
"> Thanks for rerunning the code to record the output. Is it the `\"Resolving data files\"` part on your machine that takes a long time to complete, or is it `\"Loading cached processed dataset at ...\"˙`? We plan to speed up the latter by splitting bigger Arrow files into smaller ones, but your dataset doesn't seem that big, so not sure if that's the issue.\r\n\r\nSounds good! The main position, which costs long time, is from program starting to `\"Resolving data files\"`. I hope you can solve it early, thanks!",
"I'm getting this problem. Script has been stuck at this part for the past 15 or so minutes:\r\n \r\n`Resolving data files: 100%|█████████████████████████████████████████| 107/107 [00:00<00:00, 472.74it/s]`\r\n\r\nI had everything working fine on an AWS EC2 node with a single GPU. Then I created an image based on the single GPU machine, and spun up a new one with 4 GPUs, so I got all of the training data ready at .cache. \r\n\r\nTurned off all checks with `verification_mode='no_checks'`. Logged in with huggingface-cli again just to be sure.\r\n\r\nInterrupting shows the code is stuck here:\r\n\r\n```\r\nFile \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 200, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 336, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/arrow_reader.py\", line 357, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 1059, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/opt/conda/envs/pytorch/lib/python3.10/site-packages/datasets/table.py\", line 66, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n```\r\n\r\nIs it just going to take a while or am I going to run out of money? :sweat_smile: \r\n\r\nedit: ping @mariosasko "
] |
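For readability, here is a cleaned-up sketch of the loading call suggested in the comments above; the paths are the ones from the issue, and `ignore_verifications=True` (renamed in later `datasets` releases) simply skips checksum verification to speed up loading.

```python
from datasets import load_dataset

# Paths taken from the issue above; adjust to your local ImageNet-style layout.
data_files = {
    "train": ["/ssd/datasets/imagenet/pytorch/train/**"],
    "validation": ["/ssd/datasets/imagenet/pytorch/val/**"],
}

ds = load_dataset(
    "imagefolder",
    data_files=data_files,
    cache_dir="./",
    task="image-classification",
    ignore_verifications=True,  # skip checksum verification to speed up loading
)
```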
https://api.github.com/repos/huggingface/datasets/issues/4685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4685/comments | https://api.github.com/repos/huggingface/datasets/issues/4685/events | https://github.com/huggingface/datasets/pull/4685 | 1,305,861,708 | PR_kwDODunzps47dju8 | 4,685 | Fix mock fsspec | [] | closed | false | null | 1 | 2022-07-15T10:23:12Z | 2022-07-15T13:05:03Z | 2022-07-15T12:52:40Z | null | This PR:
- Removes an unused method from `DummyTestFS`
- Refactors `mock_fsspec` to make it simpler | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4685/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4685",
"merged_at": "2022-07-15T12:52:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4685"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3429/comments | https://api.github.com/repos/huggingface/datasets/issues/3429/events | https://github.com/huggingface/datasets/pull/3429 | 1,078,902,390 | PR_kwDODunzps4vx1gp | 3,429 | Make cast cacheable (again) on Windows | [] | closed | false | null | 0 | 2021-12-13T19:32:02Z | 2021-12-14T14:39:51Z | 2021-12-14T14:39:50Z | null | `cast` currently emits the following warning when called on Windows:
```
Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting
and caching to work. If you reuse this transform, the caching mechanism will consider it to be different
from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
```
It seems like the issue stems from the `config.PYARROW_VERSION` object not being serializable on Windows (tested with `dumps(lambda: config.PYARROW_VERSION)`), so I'm fixing this by capturing `config.PYARROW_VERSION.major` before the lambda definition. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3429/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3429",
"merged_at": "2021-12-14T14:39:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3429"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/584/comments | https://api.github.com/repos/huggingface/datasets/issues/584/events | https://github.com/huggingface/datasets/pull/584 | 695,186,652 | MDExOlB1bGxSZXF1ZXN0NDgxNDY0NjEz | 584 | Use github versioning | [] | closed | false | null | 1 | 2020-09-07T14:58:15Z | 2020-09-09T13:37:35Z | 2020-09-09T13:37:34Z | null | Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version.
To fix that I changed the download URL from S3 to GitHub, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certain version of the lib, as in #562 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/584/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/584",
"merged_at": "2020-09-09T13:37:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/584"
} | true | [
"I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? Maybe `repo_version` or `script_version` ?"
] |
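A sketch of what pinning looks like from the user side; the parameter name changed over time (`version` in this PR, later `script_version`, and `revision` in recent releases), so treat the exact name as depending on your `datasets` version.

```python
from datasets import load_dataset

# `revision` accepts a branch name, tag, or commit hash; pass a tag or a
# commit hash (instead of "main") to pin a specific version of the script/data.
dataset = load_dataset("squad", revision="main")
```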
https://api.github.com/repos/huggingface/datasets/issues/5095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5095/comments | https://api.github.com/repos/huggingface/datasets/issues/5095/events | https://github.com/huggingface/datasets/pull/5095 | 1,403,221,408 | PR_kwDODunzps5Afzsq | 5,095 | Fix tutorial (#5093) | [] | closed | false | null | 2 | 2022-10-10T13:55:15Z | 2022-10-10T17:50:52Z | 2022-10-10T15:32:20Z | null | Close #5093 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5095/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5095/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5095.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5095",
"merged_at": "2022-10-10T15:32:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5095.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5095"
} | true | [
"Oops I merged without linking to the hacktoberfest issue - not sure if it counts in this case\r\n\r\nsorry about that..\r\n\r\nNext time you can just mention \"Close #XXXX\" in your issue to link it",
"It should :) (the `hacktoberfest` repo topic is all that matters)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2671/comments | https://api.github.com/repos/huggingface/datasets/issues/2671/events | https://github.com/huggingface/datasets/pull/2671 | 947,273,875 | MDExOlB1bGxSZXF1ZXN0NjkyMjc5MTM0 | 2,671 | Mesinesp development and training data sets have been added. | [] | closed | false | null | 1 | 2021-07-19T05:14:38Z | 2021-07-19T07:32:28Z | 2021-07-19T06:45:50Z | null | https://zenodo.org/search?page=1&size=20&q=mesinesp, Mesinesp has Medical Semantic Indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms.
The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records.
The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) training set has a total of 369,368 records.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2671/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2671/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2671.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2671",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2671.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2671"
} | true | [
"It'll be new pull request with new commits."
] |
https://api.github.com/repos/huggingface/datasets/issues/5346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5346/comments | https://api.github.com/repos/huggingface/datasets/issues/5346/events | https://github.com/huggingface/datasets/issues/5346 | 1,486,884,983 | I_kwDODunzps5YoBB3 | 5,346 | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | [] | closed | false | null | 3 | 2022-12-09T14:48:02Z | 2023-06-02T20:24:44Z | 2023-01-25T19:35:40Z | null | Thanks to all of you, Datasets is just about to pass 15k stars!
Since the last survey, a lot has happened: the [diffusers](https://github.com/huggingface/diffusers), [evaluate](https://github.com/huggingface/evaluate) and [skops](https://github.com/skops-dev/skops) libraries were born. `timm` joined the Hugging Face ecosystem. There were 25 new releases of `transformers`, 21 new releases of `datasets`, 13 new releases of `accelerate`.
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://docs.google.com/forms/d/e/1FAIpQLSf4xFQKtpjr6I_l7OfNofqiR8s-WG6tcNbkchDJJf5gYD72zQ/viewform?usp=sf_link)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5346/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5346/timeline | null | completed | null | null | false | [
"As the survey is finished, can we close this issue, @LysandreJik ?",
"Yes! I'll post a public summary on the forums shortly.",
"Is the summary available? I would be interested in reading your findings."
] |
https://api.github.com/repos/huggingface/datasets/issues/3263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3263/comments | https://api.github.com/repos/huggingface/datasets/issues/3263/events | https://github.com/huggingface/datasets/issues/3263 | 1,052,552,516 | I_kwDODunzps4-vK1E | 3,263 | FET DATA | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2021-11-13T05:46:06Z | 2021-11-13T13:31:47Z | 2021-11-13T13:31:47Z | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3263/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2848/comments | https://api.github.com/repos/huggingface/datasets/issues/2848/events | https://github.com/huggingface/datasets/pull/2848 | 981,953,908 | MDExOlB1bGxSZXF1ZXN0NzIxODYyMDQx | 2,848 | Update README.md | [] | closed | false | null | 1 | 2021-08-28T23:58:26Z | 2021-09-07T09:40:32Z | 2021-09-07T09:40:32Z | null | Changed 'Tain' to 'Train'. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2848/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2848",
"merged_at": "2021-09-07T09:40:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2848"
} | true | [
"Merging since the CI error is unrelated to this PR and fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3521/comments | https://api.github.com/repos/huggingface/datasets/issues/3521/events | https://github.com/huggingface/datasets/pull/3521 | 1,093,797,947 | PR_kwDODunzps4wiFCs | 3,521 | Vivos license update | [] | closed | false | null | 0 | 2022-01-04T22:17:47Z | 2022-01-04T22:18:16Z | 2022-01-04T22:18:16Z | null | Updated the license information with the link to the license text | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3521/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3521",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3521"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4909/comments | https://api.github.com/repos/huggingface/datasets/issues/4909/events | https://github.com/huggingface/datasets/pull/4909 | 1,353,997,788 | PR_kwDODunzps499Fhe | 4,909 | Update GLUE evaluation metadata | [] | closed | false | null | 1 | 2022-08-29T09:43:44Z | 2022-08-29T14:53:29Z | 2022-08-29T14:51:18Z | null | This PR updates the evaluation metadata for GLUE to:
* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)
* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)
* Fix the `task_id` for some existing defaults
cc @sashavor @douwekiela | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4909/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4909",
"merged_at": "2022-08-29T14:51:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4909"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2735/comments | https://api.github.com/repos/huggingface/datasets/issues/2735/events | https://github.com/huggingface/datasets/issues/2735 | 956,889,365 | MDU6SXNzdWU5NTY4ODkzNjU= | 2,735 | Add Open Buildings dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-07-30T16:08:39Z | 2021-07-31T05:01:25Z | null | null | ## Adding a Dataset
- **Name:** Open Buildings
- **Description:** A dataset of building footprints to support social good applications.
Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa.
See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html
- **Paper:** https://arxiv.org/abs/2107.12283
- **Data:** https://sites.research.google/open-buildings/
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Reported by: @osanseviero | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2735/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2735/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3158/comments | https://api.github.com/repos/huggingface/datasets/issues/3158/events | https://github.com/huggingface/datasets/pull/3158 | 1,035,158,070 | PR_kwDODunzps4toGpe | 3,158 | Fix string encoding for Value type | [] | closed | false | null | 1 | 2021-10-25T13:44:13Z | 2021-10-25T14:12:06Z | 2021-10-25T14:12:05Z | null | Some metrics have `string` features but currently it fails if users pass integers instead. Indeed feature encoding that handles the conversion of the user's objects to the right python type is missing a case for `string`, while it already works as expected for integers, floats and booleans
Here is an example code that didn't work previously, but that works with this fix:
```python
import datasets
# Note that 'id' is an integer while the SQuAD metric uses strings
predictions = [{'prediction_text': '1976', 'id': 5}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}]
squad_metric = datasets.load_metric("squad")
squad_metric.add_batch(predictions=predictions, references=references)
results = squad_metric.compute()
# {'exact_match': 100.0, 'f1': 100.0}
```
cc @sgugger @philschmid | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3158/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3158",
"merged_at": "2021-10-25T14:12:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3158"
} | true | [
"That was fast! \r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2042 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2042/comments | https://api.github.com/repos/huggingface/datasets/issues/2042/events | https://github.com/huggingface/datasets/pull/2042 | 830,190,276 | MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3 | 2,042 | Fix arrow memory checks issue in tests | [] | closed | false | null | 0 | 2021-03-12T14:49:52Z | 2021-03-12T15:04:23Z | 2021-03-12T15:04:22Z | null | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that maybe some arrow objects from other tests were not freeing their memory in time, causing the memory verifications to fail in other tests.
Running the garbage collector before checking the arrow memory usage seems to fix this issue.
I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2042/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"merged_at": "2021-03-12T15:04:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042"
} | true | [] |
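A rough, illustrative reimplementation of such a context manager based on the description above; this is not the actual helper added in the PR.

```python
import gc
from contextlib import contextmanager

import pyarrow as pa


@contextmanager
def assert_arrow_memory_increases():
    # Collect garbage first so leftover Arrow objects from earlier tests
    # don't skew the measurement.
    gc.collect()
    before = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > before


# Usage sketch: building a table in memory should increase Arrow's allocation.
with assert_arrow_memory_increases():
    table = pa.table({"col": list(range(100_000))})
```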
https://api.github.com/repos/huggingface/datasets/issues/4561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4561/comments | https://api.github.com/repos/huggingface/datasets/issues/4561/events | https://github.com/huggingface/datasets/pull/4561 | 1,283,624,242 | PR_kwDODunzps46TnVe | 4,561 | Add evaluation data to acronym_identification | [] | closed | false | null | 1 | 2022-06-24T11:17:33Z | 2022-06-27T09:37:55Z | 2022-06-27T08:49:22Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4561/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4561",
"merged_at": "2022-06-27T08:49:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4561"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | [] | closed | false | null | 9 | 2021-04-07T10:26:46Z | 2021-04-12T07:15:55Z | 2021-04-12T07:15:55Z | null | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that the error comes from pyarrow, but could you give me a hint or possible solutions?
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance! | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | completed | null | null | false | [
"Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well as the size of individual chunks in the dataset.\r\n\r\nYou can also try with bigger block sizes if needed",
"Hi @lhoestq! Thank you for your prompt reply.\r\nI have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.\r\n\r\nCould you give me a bit of background on why block size needs to be exactly calibrated?\r\nTo my understanding, small block sized should run just fine despite its slowness..\r\n\r\n\r\n",
"We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.\r\nThis issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.\r\nSo with a big value for chunk_size this should have worked unless you have one extremely long line in your file.\r\n\r\nAlso what version of pyarrow are you using ?\r\n\r\nFInally I wonder if it could be an issue on pyarrow's side when using big json files. (I haven't tested big json files like yours)",
"I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.\r\n\r\nYour point totally makes sense. I will check if my jsonl file contains an extremely long file and let you know. \r\n\r\nHere are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it would be wonderful if datasesets could give a clear guide on how to play with large datasets! (I am suddenly experiencing various issue when working with large datasets.. e.g. #1992 )\r\n```python\r\n return paj.ReadOptions(use_threads=self.use_threads, block_size=self.block_size)\r\n File \"pyarrow/_json.pyx\", line 56, in pyarrow._json.ReadOptions.__init__\r\n File \"pyarrow/_json.pyx\", line 81, in pyarrow._json.ReadOptions.block_size.__set__\r\nOverflowError: value too large to convert to int32_t\r\n```\r\n\r\n```python\r\n\r\nline 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```",
"I am getting the same error. When I tweak the block_size, I also find:\r\n`OverflowError: value too large to convert to int32_t`\r\nand \r\n`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`\r\n",
"I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. I had the following data format:\r\n```python\r\n[\r\n {'key': \"a\", 'value': ['one', 'two', 'three']},\r\n {'key': \"b\", 'value': ['four', 'five', 'six']}\r\n]\r\n```\r\nI changed to:\r\n\r\n```python\r\n {'key': \"a\", 'value': 'one\\ntwo\\nthree'},\r\n {'key': \"b\", 'value': 'four\\nfive\\nsix']}\r\n```\r\nand that worked!\r\n\r\nI used the following to reformat my json file:\r\n```python\r\nwith open(file_name, \"w\", encoding=\"utf-8\") as f:\r\n for item in list_:\r\n f.write(json.dumps(item) + \"\\n\")\r\n```\r\nThis works with `block_size_10MB = 10 << 20` or without specifying `block_size`.",
"Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.\r\n\r\nIndeed, those are different JSON-like formats:\r\n- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brackets `[...]`)\r\n- the second one is called **JSON Lines**: the entire file content is not JSON-valid, but only every line (newline-delimited) is JSON-valid\r\n\r\nCurrently PyArrow only supports **JSON Lines** format: \r\n- https://arrow.apache.org/docs/python/generated/pyarrow.json.read_json.html\r\n > Currently only the line-delimited JSON format is supported.\r\n- https://arrow.apache.org/docs/python/json.html\r\n > Arrow supports reading columnar data from line-delimited JSON files.",
"Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!\r\nHowever, the problem I described above happened when I was dealing with jsonl files 😿\r\nAlthough I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case.",
"I see... I guess there is another problem going one then, related to the size."
] |
https://api.github.com/repos/huggingface/datasets/issues/2403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2403/comments | https://api.github.com/repos/huggingface/datasets/issues/2403/events | https://github.com/huggingface/datasets/pull/2403 | 900,059,014 | MDExOlB1bGxSZXF1ZXN0NjUxNjcxMTMw | 2,403 | Free datasets with cache file in temp dir on exit | [] | closed | false | null | 0 | 2021-05-24T22:15:11Z | 2021-05-26T17:25:19Z | 2021-05-26T16:39:29Z | null | This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir.
Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function.
Fixes #2402 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2403/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2403",
"merged_at": "2021-05-26T16:39:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2403"
} | true | [] |
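A minimal sketch of the idea of a temporary directory that runs a user-supplied clean-up function before removal; the class name and structure are illustrative, not the code from the PR.

```python
import shutil
import tempfile
import weakref


class TemporaryDirectoryWithCleanup:
    """Like tempfile.TemporaryDirectory, but calls `cleanup_fn(path)` first."""

    def __init__(self, cleanup_fn=None):
        self.name = tempfile.mkdtemp()
        self._cleanup_fn = cleanup_fn
        # weakref.finalize removes the directory at interpreter exit
        # even if cleanup() is never called explicitly.
        self._finalizer = weakref.finalize(self, self._run_cleanup, self.name, cleanup_fn)

    @staticmethod
    def _run_cleanup(name, cleanup_fn):
        if cleanup_fn is not None:
            cleanup_fn(name)  # e.g. release memory-mapped tables that reference cache files
        shutil.rmtree(name, ignore_errors=True)

    def cleanup(self):
        if self._finalizer.detach():
            self._run_cleanup(self.name, self._cleanup_fn)
```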
https://api.github.com/repos/huggingface/datasets/issues/4848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4848/comments | https://api.github.com/repos/huggingface/datasets/issues/4848/events | https://github.com/huggingface/datasets/pull/4848 | 1,338,271,833 | PR_kwDODunzps49JNj_ | 4,848 | a | [] | closed | false | null | 0 | 2022-08-14T15:01:16Z | 2022-08-14T15:09:59Z | 2022-08-14T15:09:59Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4848/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4848",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4848"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5810 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5810/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5810/comments | https://api.github.com/repos/huggingface/datasets/issues/5810/events | https://github.com/huggingface/datasets/pull/5810 | 1,689,917,822 | PR_kwDODunzps5PdJHI | 5,810 | Add `fn_kwargs` to `map` and `filter` of `IterableDataset` and `IterableDatasetDict` | [] | closed | false | null | 9 | 2023-04-30T13:23:01Z | 2023-05-22T08:12:39Z | 2023-05-22T08:05:31Z | null | # Overview
I've added an argument `fn_kwargs` to the `map` and `filter` methods of the `IterableDataset` and `IterableDatasetDict` classes.
# Details
Currently, the `map` and `filter` methods of some classes related to `IterableDataset` do not allow specifying the arguments passed to the function. This pull request adds `fn_kwargs` to pass arguments to the mapping function. This allows users to preprocess data more flexibly.
Added `fn_kwargs` to the following classes and methods (description of the argument is also added).
1. class `FilteredExamplesIterable`
2. method `filter` of class `IterableDataset`
3. method `map` of class `IterableDatasetDict`
4. method `filter` of class `IterableDatasetDict`
# Example of changes
Here's an example of how to use the new functionality:
```python
from datasets import IterableDatasetDict
def preprocess_function(example, a=None, b=None):
# do something
return example
dataset = IterableDatasetDict(...)
dataset = dataset.map(preprocess_function, fn_kwargs={"a": 1, "b": 2})
```
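A similar sketch for `filter` on a streaming dataset (the dataset name and the `text` column below are just placeholders; any `IterableDataset`, e.g. obtained with `load_dataset(..., streaming=True)`, works the same way):
```python
from datasets import load_dataset

def is_long_enough(example, min_length=0):
    return len(example["text"]) >= min_length

stream = load_dataset("placeholder/dataset", split="train", streaming=True)
stream = stream.filter(is_long_enough, fn_kwargs={"min_length": 100})
```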
# Related Issues
This pull request is related to the following issue:
https://github.com/huggingface/datasets/issues/3444 .
# Testing
I have added unit tests to test the new functionality.
In test_iterable_dataset.py
- Added `test_filtered_examples_iterable_with_fn_kwargs` for [1](#details).
- Added `test_iterable_dataset_filter` for [2](#details).
- Added `test_iterable_dataset_map_with_fn_kwargs`. This is not a newly added feature, but was added because it was not tested.
In test_dataset_dict.py
- Added `_create_dummy_iterable_dataset` for [3](#details) and [4](#details).
- Added `_create_dummy_iterable_dataset_dict` for [3](#details) and [4](#details).
- Added `test_iterable_map` for [3](#details).
- Added `test_iterable_filter` for [4](#details).
Note that there is no test for `IterableDatasetDict` on the current main branch. I thought about writing tests for `IterableDatasetDict` in a new file, but I decided to add them to the test file for `DatasetDict` (test_dataset_dict.py).
# Checklist
- [x] Format the code.
- [x] Added tests.
- [x] Passed tests locally. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5810/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5810/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5810.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5810",
"merged_at": "2023-05-22T08:05:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5810.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5810"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry, the local test passed because it was inadvertently testing the main branch. I am currently fixing where the test failed.",
"- I have fixed the bug and addressed the above two points.\r\n- I have tested locally and confirmed that the test passes.\r\n\r\nPlease check the contents. @lhoestq \r\n\r\n5715a7e64bdd2951e6705aee58d592392e1538d6",
"Cool ! You can run `make style` to fix code formatting to fix the ci",
"I had forgotten about it. I did it. @lhoestq \r\n00248926a37c6f1387614aa388c36fdc105a59f5",
"Thanks for putting this together @yuukicammy ! Looking forward to using this new addition ASAP. \r\n@lhoestq - sorry to bother you with this, but if this looks good to you, any chance we could get this merged in? \r\n\r\nThanks again to you both! ",
"Yup there's just one test to remove and we can merge",
"Sorry for my understanding wrong! Correspondence has been addressed. @lhoestq \r\n ca511b7b29fdde51ffd69b58bda79220472e9e94\r\n\r\nThanks for your comment! @brianhill11 ",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006788 / 0.011353 (-0.004564) | 0.004372 / 0.011008 (-0.006636) | 0.097746 / 0.038508 (0.059238) | 0.034858 / 0.023109 (0.011749) | 0.298122 / 0.275898 (0.022224) | 0.335272 / 0.323480 (0.011792) | 0.005810 / 0.007986 (-0.002175) | 0.004944 / 0.004328 (0.000616) | 0.072352 / 0.004250 (0.068101) | 0.041730 / 0.037052 (0.004678) | 0.316482 / 0.258489 (0.057992) | 0.338710 / 0.293841 (0.044869) | 0.027975 / 0.128546 (-0.100571) | 0.008746 / 0.075646 (-0.066901) | 0.329336 / 0.419271 (-0.089935) | 0.051327 / 0.043533 (0.007794) | 0.300695 / 0.255139 (0.045556) | 0.322813 / 0.283200 (0.039613) | 0.101133 / 0.141683 (-0.040550) | 1.422767 / 1.452155 (-0.029388) | 1.538364 / 1.492716 (0.045648) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.016698 / 0.018006 (-0.001308) | 0.447042 / 0.000490 (0.446552) | 0.007609 / 0.000200 (0.007409) | 0.000277 / 0.000054 (0.000223) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026732 / 0.037411 (-0.010679) | 0.108295 / 0.014526 (0.093769) | 0.116905 / 0.176557 (-0.059652) | 0.173166 / 0.737135 (-0.563969) | 0.122560 / 0.296338 (-0.173779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394893 / 0.215209 (0.179683) | 3.950314 / 2.077655 (1.872659) | 1.780576 / 1.504120 (0.276456) | 1.579855 / 1.541195 (0.038660) | 1.711197 / 1.468490 
(0.242707) | 0.521469 / 4.584777 (-4.063308) | 3.838850 / 3.745712 (0.093138) | 3.101095 / 5.269862 (-2.168767) | 1.531574 / 4.565676 (-3.034102) | 0.065291 / 0.424275 (-0.358984) | 0.011979 / 0.007607 (0.004372) | 0.496543 / 0.226044 (0.270498) | 4.965446 / 2.268929 (2.696517) | 2.250788 / 55.444624 (-53.193837) | 1.923231 / 6.876477 (-4.953245) | 2.075372 / 2.142072 (-0.066700) | 0.638708 / 4.805227 (-4.166519) | 0.142048 / 6.500664 (-6.358616) | 0.064225 / 0.075469 (-0.011244) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211799 / 1.841788 (-0.629989) | 14.791822 / 8.074308 (6.717514) | 14.274993 / 10.191392 (4.083601) | 0.163942 / 0.680424 (-0.516482) | 0.017541 / 0.534201 (-0.516660) | 0.396440 / 0.579283 (-0.182843) | 0.427502 / 0.434364 (-0.006861) | 0.494273 / 0.540337 (-0.046064) | 0.586877 / 1.386936 (-0.800059) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006846 / 0.011353 (-0.004506) | 0.004854 / 0.011008 (-0.006154) | 0.075654 / 0.038508 (0.037146) | 0.034295 / 0.023109 (0.011186) | 0.378095 / 0.275898 (0.102197) | 0.407833 / 0.323480 (0.084353) | 0.006155 / 0.007986 (-0.001830) | 0.004259 / 0.004328 (-0.000070) | 0.076195 / 0.004250 (0.071944) | 0.051901 / 0.037052 (0.014849) | 0.375027 / 0.258489 (0.116538) | 0.428189 / 0.293841 (0.134348) | 0.028814 / 0.128546 (-0.099733) | 0.009209 / 0.075646 (-0.066438) | 0.083681 / 0.419271 (-0.335591) | 0.049158 / 0.043533 (0.005625) | 0.366669 / 0.255139 (0.111530) | 0.388767 / 0.283200 (0.105568) | 0.107837 / 0.141683 (-0.033845) | 1.476354 / 1.452155 (0.024199) | 1.580160 / 1.492716 (0.087443) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218900 / 0.018006 (0.200894) | 0.445475 / 0.000490 (0.444985) | 0.000423 / 0.000200 (0.000223) | 0.000063 / 0.000054 (0.000008) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029740 / 0.037411 (-0.007671) | 0.115192 / 0.014526 (0.100666) | 0.122439 / 0.176557 (-0.054118) | 0.170639 / 0.737135 (-0.566496) | 0.128085 / 0.296338 (-0.168254) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437745 / 0.215209 (0.222536) | 4.385695 / 2.077655 (2.308040) | 2.189893 / 1.504120 (0.685773) | 2.023160 / 1.541195 (0.481965) | 2.112798 / 1.468490 (0.644308) | 0.522497 / 4.584777 (-4.062280) | 3.881356 / 3.745712 (0.135644) | 3.206090 / 5.269862 (-2.063772) | 1.308241 / 4.565676 (-3.257435) | 0.065635 / 0.424275 (-0.358640) | 0.012288 / 0.007607 (0.004681) | 0.537265 / 0.226044 (0.311220) | 5.361641 / 2.268929 (3.092712) | 2.638941 / 55.444624 (-52.805684) | 2.344717 / 6.876477 (-4.531759) | 2.437619 / 2.142072 (0.295546) | 0.645079 / 4.805227 (-4.160149) | 0.143852 / 6.500664 (-6.356812) | 0.065796 / 0.075469 (-0.009673) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276588 / 1.841788 (-0.565200) | 15.239396 / 8.074308 (7.165088) | 13.150591 / 10.191392 (2.959199) | 0.163635 / 0.680424 (-0.516789) | 0.017533 / 0.534201 (-0.516668) | 0.397659 / 0.579283 (-0.181624) | 0.425589 / 0.434364 (-0.008774) | 0.466570 / 0.540337 (-0.073768) | 0.563953 / 1.386936 (-0.822983) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/55 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/55/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/55/comments | https://api.github.com/repos/huggingface/datasets/issues/55/events | https://github.com/huggingface/datasets/pull/55 | 613,968,072 | MDExOlB1bGxSZXF1ZXN0NDE0NjE0MjE1 | 55 | Beam datasets | [] | closed | false | null | 4 | 2020-05-07T11:04:32Z | 2020-05-11T07:20:02Z | 2020-05-11T07:20:00Z | null | # Beam datasets
## Intro
Beam Datasets are using beam pipelines for preprocessing (basically lots of `.map` over objects called PCollections).
The advantage of apache beam is that you can choose which type of runner you want to use to preprocess your data. The main runners are:
- the `DirectRunner` to run the pipeline locally (default). However I encountered memory issues for big datasets (like the French or English Wikipedia). Small datasets work fine
- Google Dataflow. I didn't play with it.
- Spark or Flink, two well-known data processing frameworks. I tried to use the Spark/Flink local runners provided by Apache Beam for Python but wasn't able to make them work properly though...
## From tfds beam datasets to our own beam datasets
Tensorflow datasets used beam and a complicated pipeline to shard the TFRecords files.
To let users download beam datasets without having to preprocess them, they also allow downloading the already preprocessed datasets from their Google storage (the beam pipeline doesn't run in that case).
On our side, we replace TFRecords with something else. Arrow or Parquet do the job but I chose Parquet because: 1) there is a built-in Apache Beam Parquet writer that is quite convenient, and 2) reading Parquet from the pyarrow library is also simple and effective (there is a mmap option!)
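For instance, reading one split back with pyarrow could look like this (just a sketch, the file name is made up):
```python
import pyarrow.parquet as pq

# memory-mapped read of a split written by the beam pipeline
table = pq.read_table("wikipedia-train.parquet", memory_map=True)
```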
Moreover we don't shard datasets into many, many files like tfds (they were probably doing that mainly because of the 2Gb limit per TFRecord file). Therefore we have a simpler pipeline that saves each split into one parquet file. We also removed the utilities to use their Google storage (for now maybe? we'll have to discuss it).
## Main changes
- Added a BeamWriter to save the output of beam pipelines into parquet files and fill dataset infos
- Created a ParquetReader and refactored arrow_reader.py a bit
\> **With this, we can now try to add beam datasets from tfds**
I already added the wikipedia one, and I will also try to add the Wiki40b dataset
## Test the wikipedia script
You can download and run the beam pipeline for wikipedia (using the `DirectRunner` by default) like this:
```
>>> import nlp
>>> nlp.load("datasets/nlp/wikipedia", dataset_config="20200501.frr")
```
This wikipedia dataset (lang: frr, North Frisian) is a small one (~10Mb), but feel free to try bigger ones (and fill 20Gb of swap memory if you try the english one lol)
## Next
Should we allow downloading preprocessed datasets from the tfds Google storage?
Should we try to optimize the beam pipelines to run locally without memory issues?
Should we try other data processing frameworks for big datasets, like Spark?
## About this PR
It should be merged after #25
-----------------
I'd be happy to have your feedback and your ideas to improve the processing of big datasets like wikipedia :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/55/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/55/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/55.diff",
"html_url": "https://github.com/huggingface/datasets/pull/55",
"merged_at": "2020-05-11T07:20:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/55.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/55"
} | true | [
"Right now the changes are a bit hard to read as the one from #25 are also included. You can wait until #25 is merged before looking at the implementation details",
"Nice!! I tested it a bit and works quite well. I will do a my review once the #25 will be merged because there are several overlaps.\r\n\r\nAt least I can share my thoughts on your **Next** section:\r\n1) I don't think it is a good thing to rely on tfds preprocessed datasets uploaded in their online storage, because they might be updated or deleted at any moment by Google and then possibly break our own processing.\r\n2) Improves the pipeline is always a good direction, but in the meantime we might also share the preprocessed dataset in S3 storage. Which might be another way to see 1), instead of downloading Google preprocessed datasets, using our own ones.\r\n3) Apache Beam can be easily integrated in Spark, so I don't see the need to replace Beam by Spark.",
"Ok I've merged #25 so you can rebase or merge if you want.\r\n\r\nI fully agree with @jplu notes for the \"next section\".\r\n\r\nDon't hesitate to use some credit on Google Dataflow if you think it would be useful to give it a try.",
"Pr is ready for review !\r\n\r\nNew minor changes:\r\n- re-added the csv dataset builder (it was on my branch from #25 but disappeared from master)\r\n- move the csv script and the wikipedia script to \"under construction\" for now\r\n- some renaming in the `nlp-cli test` command"
] |
https://api.github.com/repos/huggingface/datasets/issues/1762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1762/comments | https://api.github.com/repos/huggingface/datasets/issues/1762/events | https://github.com/huggingface/datasets/issues/1762 | 791,226,007 | MDU6SXNzdWU3OTEyMjYwMDc= | 1,762 | Unable to format dataset to CUDA Tensors | [] | closed | false | null | 6 | 2021-01-21T15:31:23Z | 2021-02-02T07:13:22Z | 2021-02-02T07:13:22Z | null | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't support assignment:
```
columns=['input_ids', 'token_type_ids', 'attention_mask', 'start_positions','end_positions']
samples.set_format(type='torch', columns = columns)
for column in columns:
samples[column].to(torch.device(self.config.device))
```
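Ideally something along these lines would work (just a sketch of what I am hoping for):
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
# a device argument (or any kwarg forwarded to torch.tensor) would avoid the manual .to() calls
samples.set_format(type='torch', columns=columns, device='cuda')
```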
There should be an option to do so, or if there is already a way to do this, please let me know.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1762/timeline | null | completed | null | null | false | [
"Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`",
"Hi @lhoestq,\r\n\r\nThanks a lot. Is this true for all format types?\r\n\r\nAs in, for 'torch', I can have `**kwargs` to `torch.tensor` and for 'tf' those args are passed to `tf.Tensor`, and the same for 'numpy' and 'pandas'?",
"Yes the keywords arguments are passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`.\r\nWe don't support the kwargs for pandas on the other hand.",
"Thanks @lhoestq,\r\nWould it be okay if I added this to the docs and made a PR?",
"Sure ! Feel free to open a PR to improve the documentation :) ",
"Closing this issue as it has been resolved."
] |
https://api.github.com/repos/huggingface/datasets/issues/477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/477/comments | https://api.github.com/repos/huggingface/datasets/issues/477/events | https://github.com/huggingface/datasets/issues/477 | 673,142,143 | MDU6SXNzdWU2NzMxNDIxNDM= | 477 | Overview.ipynb throws exceptions with nlp 0.4.0 | [] | closed | false | null | 3 | 2020-08-04T23:18:15Z | 2021-08-03T06:02:15Z | 2021-08-03T06:02:15Z | null | with nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-5-48907f2ad433> in <module>
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
<ipython-input-5-48907f2ad433> in <dictcomp>(.0)
----> 1 features = {x: train_tf_dataset[x].to_tensor(default_value=0, shape=[None, tokenizer.max_len]) for x in columns[:3]}
2 labels = {"output_1": train_tf_dataset["start_positions"].to_tensor(default_value=0, shape=[None, 1])}
3 labels["output_2"] = train_tf_dataset["end_positions"].to_tensor(default_value=0, shape=[None, 1])
4 tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
AttributeError: 'numpy.ndarray' object has no attribute 'to_tensor' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/477/timeline | null | completed | null | null | false | [
"Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'to_tensor'\r\n```\r\n\r\nThis is because the dataset returns a Tensor and not a RaggedTensor.\r\nBut I think we should always return a RaggedTensor unless the length of the sequence is fixed (it that case they can be stack into a Tensor).",
"Hi, I got another error (on Colab):\r\n\r\n```python\r\n# You can read a few attributes of the datasets before loading them (they are python dataclasses)\r\nfrom dataclasses import asdict\r\n\r\nfor key, value in asdict(datasets[6]).items():\r\n print('👉 ' + key + ': ' + str(value))\r\n\r\n---------------------------------------------------------------------------\r\n\r\nTypeError Traceback (most recent call last)\r\n\r\n<ipython-input-6-b8ace6c227a2> in <module>()\r\n 2 from dataclasses import asdict\r\n 3 \r\n----> 4 for key, value in asdict(datasets[6]).items():\r\n 5 print('👉 ' + key + ': ' + str(value))\r\n\r\n/usr/local/lib/python3.6/dist-packages/dataclasses.py in asdict(obj, dict_factory)\r\n 1008 \"\"\"\r\n 1009 if not _is_dataclass_instance(obj):\r\n-> 1010 raise TypeError(\"asdict() should be called on dataclass instances\")\r\n 1011 return _asdict_inner(obj, dict_factory)\r\n 1012 \r\n\r\nTypeError: asdict() should be called on dataclass instances\r\n```",
"Indeed we'll update the cola with the new release coming up this week."
] |
https://api.github.com/repos/huggingface/datasets/issues/4485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4485/comments | https://api.github.com/repos/huggingface/datasets/issues/4485/events | https://github.com/huggingface/datasets/pull/4485 | 1,269,463,054 | PR_kwDODunzps45kD7A | 4,485 | Fix cast to null | [] | closed | false | null | 1 | 2022-06-13T13:44:32Z | 2022-06-14T13:43:54Z | 2022-06-14T13:34:14Z | null | It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast integer to null type.
Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type).
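A minimal sketch of the scenario (made-up column name):
```python
from datasets import Dataset

# column "a" only contains None values, so its Arrow type is null
ds = Dataset.from_dict({"a": [None, None]})
# replacing the nulls with integers in map() first attempts a cast to the previous (null) type;
# raising TypeError instead of ArrowNotImplementedError lets type inference take over
ds = ds.map(lambda x: {"a": 0})
```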
Fix https://github.com/huggingface/datasets/issues/4483 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4485/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4485.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4485",
"merged_at": "2022-06-14T13:34:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4485.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4485"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3492/comments | https://api.github.com/repos/huggingface/datasets/issues/3492/events | https://github.com/huggingface/datasets/pull/3492 | 1,089,952,943 | PR_kwDODunzps4wVufr | 3,492 | Add `gzip` for `to_json` | [] | closed | false | null | 0 | 2021-12-28T15:01:11Z | 2022-07-10T14:36:52Z | 2022-01-05T13:03:36Z | null | (Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3492/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3492.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3492",
"merged_at": "2022-01-05T13:03:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3492.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3492"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2252/comments | https://api.github.com/repos/huggingface/datasets/issues/2252/events | https://github.com/huggingface/datasets/issues/2252 | 865,870,710 | MDU6SXNzdWU4NjU4NzA3MTA= | 2,252 | Slow dataloading with big datasets issue persists | [] | open | false | null | 54 | 2021-04-23T08:18:20Z | 2023-01-31T14:07:00Z | null | null | Hi,
I reported that data fetching was too slow with large data (#2210) a couple of weeks ago, and @lhoestq referred me to the fix (#2122).
However, the problem seems to persist. Here are the profiled results:
1) Running with 60GB
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 517.96 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
model_backward | 0.26144 |100 | 26.144 | 5.0475 |
model_forward | 0.11123 |100 | 11.123 | 2.1474 |
get_train_batch | 0.097121 |100 | 9.7121 | 1.8751 |
```
2) Running with 600GB, datasets==1.6.0
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 4563.2 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
get_train_batch | 5.1279 |100 | 512.79 | 11.237 |
model_backward | 4.8394 |100 | 483.94 | 10.605 |
model_forward | 0.12162 |100 | 12.162 | 0.26652 |
```
I see that `get_train_batch` lags when data is large. Could this be caused by a different issue?
I would be happy to provide necessary information to investigate. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2252/timeline | null | null | null | null | false | [
"Hi ! Sorry to hear that. This may come from another issue then.\r\n\r\nFirst can we check if this latency comes from the dataset itself ?\r\nYou can try to load your dataset and benchmark the speed of querying random examples inside it ?\r\n```python\r\nimport time\r\nimport numpy as np\r\n\r\nfrom datasets import load_from_disk\r\n\r\ndataset = load_from_disk(...) # or from load_dataset...\r\n\r\n_start = time.time()\r\nn = 100\r\nfor i in np.random.default_rng(42).integers(0, len(dataset), size=n):\r\n _ = dataset[i]\r\nprint(time.time() - _start)\r\n```\r\n\r\nIf we see a significant speed difference between your two datasets then it would mean that there's an issue somewhere",
"Hi @lhoestq, here is the result. I additionally measured time to `load_from_disk`:\r\n* 60GB\r\n```\r\nloading took: 22.618776321411133\r\nramdom indexing 100 times took: 0.10214924812316895\r\n```\r\n\r\n* 600GB\r\n```\r\nloading took: 1176.1764674186707\r\nramdom indexing 100 times took: 2.853600025177002\r\n```\r\n\r\nHmm.. I double checked that it's version 1.6.0. The difference seems quite big, could it be related to the running environment? \r\n",
"I'm surprised by the speed change. Can you give more details about your dataset ?\r\nThe speed depends on the number of batches in the arrow tables and the distribution of the lengths of the batches.\r\nYou can access the batches by doing `dataset.data.to_batches()` (use only for debugging) (it doesn't bring data in memory).\r\n\r\nAlso can you explain what parameters you used if you used `map` calls ?\r\nAlso if you have some code that reproduces the issue I'd be happy to investigate it.",
"Also if you could give us more info about your env like your OS, version of pyarrow and if you're using an HDD or a SSD",
"Here are some details of my 600GB dataset. This is a dataset AFTER the `map` function and once I load this dataset, I do not use `map` anymore in the training. Regarding the distribution of the lengths, it is almost uniform (90% is 512 tokens, and 10% is randomly shorter than that -- typical setting for language modeling).\r\n```\r\nlen(batches):\r\n492763\r\n\r\nbatches[0]: \r\npyarrow.RecordBatch\r\nattention_mask: list<item: uint8>\r\n child 0, item: uint8\r\ninput_ids: list<item: int16>\r\n child 0, item: int16\r\nspecial_tokens_mask: list<item: uint8>\r\n child 0, item: uint8\r\ntoken_type_ids: list<item: uint8>\r\n child 0, item: uint8\r\n```\r\n\r\nHere the some parameters to `map` function just in case it is relevant:\r\n```\r\nnum_proc=1 # as multi processing is slower in my case\r\nload_from_cache_file=False\r\n```\r\n",
"Regarding the environment, I am running the code on a cloud server. Here are some info:\r\n```\r\nUbuntu 18.04.5 LTS # cat /etc/issue\r\npyarrow 3.0.0 # pip list | grep pyarrow\r\n```\r\nThe data is stored in SSD and it is mounted to the machine via Network File System.\r\n\r\nIf you could point me to some of the commands to check the details of the environment, I would be happy to provide relevant information @lhoestq !",
"I am not sure how I could provide you with the reproducible code, since the problem only arises when the data is big. For the moment, I would share the part that I think is relevant. Feel free to ask me for more info.\r\n\r\n```python\r\nclass MyModel(pytorch_lightning.LightningModule)\r\n def setup(self, stage):\r\n self.dataset = datasets.load_from_disk(path)\r\n self.dataset.set_format(\"torch\")\r\n\r\n def train_dataloader(self):\r\n collate_fn = transformers.DataCollatorForLanguageModeling(\r\n tokenizer=transformers.ElectraTokenizerFast.from_pretrained(tok_path)\r\n )\r\n dataloader = torch.utils.DataLoader(\r\n self.dataset,\r\n batch_size=32,\r\n collate_fn=collate_fn,\r\n num_workers=8,\r\n pin_memory=True,\r\n )\r\n```",
"Hi ! Sorry for the delay I haven't had a chance to take a look at this yet. Are you still experiencing this issue ?\r\nI'm asking because the latest patch release 1.6.2 fixed a few memory issues that could have lead to slow downs",
"Hi! I just ran the same code with different datasets (one is 60 GB and another 600 GB), and the latter runs much slower. ETA differs by 10x.",
"@lhoestq and @hwijeen\r\n\r\nDespite upgrading to datasets 1.6.2, still experiencing extremely slow (2h00) loading for a 300Gb local dataset shard size 1.1Gb on local HDD (40Mb/s read speed). This corresponds almost exactly to total data divided by reading speed implying that it reads the entire dataset at each load.\r\n\r\nStack details:\r\n=========\r\n\r\n> GCC version: Could not collect\r\n> Clang version: Could not collect\r\n> CMake version: Could not collect\r\n> \r\n> Python version: 3.7 (64-bit runtime)\r\n> Is CUDA available: True\r\n> CUDA runtime version: 10.2.89\r\n> GPU models and configuration: GPU 0: GeForce GTX 1050\r\n> Nvidia driver version: 457.63\r\n> cuDNN version: C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v10.2\\bin\\cudnn64_7.dll\r\n> HIP runtime version: N/A\r\n> MIOpen runtime version: N/A\r\n> \r\n> Versions of relevant libraries:\r\n> [pip3] datasets==1.6.2\r\n> [pip3] transformers==4.5.1\r\n> [pip3] numpy==1.19.1\r\n> [pip3] numpydoc==1.1.0\r\n> [pip3] pytorch-metric-learning==0.9.98\r\n> [pip3] torch==1.8.1\r\n> [pip3] torchaudio==0.8.1\r\n> [pip3] torchvision==0.2.2\r\n> [conda] blas 2.16 mkl conda-forge\r\n> [conda] cudatoolkit 10.2.89 hb195166_8 conda-forge\r\n> [conda] libblas 3.8.0 16_mkl conda-forge\r\n> [conda] libcblas 3.8.0 16_mkl conda-forge\r\n> [conda] liblapack 3.8.0 16_mkl conda-forge\r\n> [conda] liblapacke 3.8.0 16_mkl conda-forge\r\n> [conda] mkl 2020.1 216\r\n> [conda] numpy 1.19.1 py37hae9e721_0 conda-forge\r\n> [conda] numpydoc 1.1.0 py_1 conda-forge\r\n> [conda] pytorch 1.8.1 py3.7_cuda10.2_cudnn7_0 pytorch\r\n> [conda] pytorch-metric-learning 0.9.98 pyh39e3cac_0 metric-learning\r\n> [conda] torchaudio 0.8.1 py37 pytorch\r\n> [conda] torchvision 0.2.2 py_3 pytorch",
"Hi @BenoitDalFerro how do your load your dataset ?",
"Hi @lhoestq thanks for the quick turn-around, actually the plain vanilla way, without an particular knack or fashion, I tried to look into the documentation for some alternative but couldn't find any\r\n\r\n> dataset = load_from_disk(dataset_path=os.path.join(datasets_dir,dataset_dir))",
"I’m facing the same issue when loading a 900GB dataset (stored via `save_to_disk`): `load_from_disk(path_to_dir)` takes 1.5 hours and htop consistently shows high IO rates > 120 M/s.",
"@tsproisl same here, smells like ~~teen spirit~~ intended generator inadvertently ending up iterator\r\n\r\n@lhoestq perhaps solution to detect bug location in code is to track its signature via HD read usage monitoring, option is to add tracking decorator on top each function and sequentially close all hatches from top to bottom, suggest PySmart https://pypi.org/project/pySMART/ a Smartmontools implementation",
"I wasn't able to reproduce this on a toy dataset of around 300GB:\r\n\r\n```python\r\nimport datasets as ds\r\n\r\ns = ds.load_dataset(\"squad\", split=\"train\")\r\ns4000 = ds.concatenate_datasets([s] * 4000)\r\nprint(ds.utils.size_str(s4000.data.nbytes)) # '295.48 GiB'\r\n\r\ns4000.save_to_disk(\"tmp/squad_4000\")\r\n```\r\n\r\n```python\r\nimport psutil\r\nimport time\r\nfrom datasets import load_from_disk\r\n\r\ndisk = \"disk0\" # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\n\r\ns4000_reloaded = load_from_disk(\"tmp/squad_4000\")\r\n\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\n\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```\r\n\r\nCould you run this on your side and tell me if how much time it takes ? Please run this when your machine is idle so that other processes don't interfere.\r\n\r\nI got these results on my macbook pro on datasets 1.6.2",
"@lhoestq thanks, test running as we speak, bear with me",
"Just tried on google colab and got ~1min for a 15GB dataset (only 200 times SQuAD), while it should be instantaneous. The time is spent reading the Apache Arrow table from the memory mapped file. This might come a virtual disk management issue. I'm trying to see if I can still speed it up on colab.",
"@lhoestq what is Google Colab's HD read speed, is it possible to introspect incl. make like SSD or HDD ?",
"@lhoestq Thank you! The issue is getting more interesting. The second script is still running, but it's definitely taking much longer than 15 seconds.",
"Okay, here’s the ouput:\r\nBlocks read 158396\r\nElapsed time: 529.10s\r\n\r\nAlso using datasets 1.6.2. Do you have any ideas, how to pinpoint the problem?",
"@lhoestq, @tsproisl mmmh still writing on my side about 1h to go, thinking on it are your large datasets all monoblock unsharded ? mine is 335 times 1.18Gb shards.",
"The 529.10s was a bit too optimistic. I cancelled the reading process once before running it completely, therefore the harddrive cache probably did its work.\r\n\r\nHere are three consecutive runs\r\nFirst run (freshly written to disk):\r\nBlocks read 309702\r\nElapsed time: 1267.74s\r\nSecond run (immediately after):\r\nBlocks read 113944\r\nElapsed time: 417.55s\r\nThird run (immediately after):\r\nBlocks read 42518\r\nElapsed time: 199.19s\r\n",
"@lhoestq \r\nFirst test\r\n> elapsed time: 11219.05s\r\n\r\nSecond test running bear with me, for Windows users slight trick to modify original \"disk0\" string:\r\n\r\nFirst find physical unit relevant key in dictionnary\r\n```\r\nimport psutil\r\npsutil.disk_io_counters(perdisk=True)\r\n```\r\n\r\n> {'PhysicalDrive0': sdiskio(read_count=18453286, write_count=4075333, read_bytes=479546467840, write_bytes=161590275072, read_time=20659, write_time=2464),\r\n> 'PhysicalDrive1': sdiskio(read_count=1495778, write_count=388781, read_bytes=548628622336, write_bytes=318234849280, read_time=426066, write_time=19085)}\r\n\r\nIn my case it's _PhysicalDrive1_\r\n\r\nThen insert relevant key's string as _disk_ variable\r\n\r\n```\r\npsutil.disk_io_counters()\r\ndisk = 'PhysicalDrive1' # You may have to change your disk here\r\niocnt1 = psutil.disk_io_counters(perdisk=True)[disk]\r\ntime1 = time.time()\r\ns4000_reloaded = load_from_disk(\"your path here\")\r\ntime2 = time.time()\r\niocnt2 = psutil.disk_io_counters(perdisk=True)[disk]\r\nprint(f\"Blocks read {iocnt2.read_count - iocnt1.read_count}\") # Blocks read 18\r\nprint(f\"Elapsed time: {time2 - time1:.02f}s\") # Elapsed time: 14.60s\r\n```",
"@lhoestq\r\nSecond test\r\n\r\n> Blocks read 1265609\r\n> Elapsed time: 11216.55s",
"@lhoestq any luck ?",
"Unfortunately no. Thanks for running the benchmark though, it shows that you machine does a lot of read operations. This is not expected: in other machines it does almost no read operations which enables a very fast loading.\r\n\r\nI did some tests on google colab and have the same issue. The first time the dataset arrow file is memory mapped takes always a lot of time (time seems linear with respect to the dataset size). Reloading the dataset is then instantaneous since the arrow file has already been memory mapped.\r\n\r\nI also tried using the Arrow IPC file format (see #1933) instead of the current streaming format that we use but it didn't help.\r\n\r\nMemory mapping is handled by the OS and depends on the disk you're using, so I'm not sure we can do much about it. I'll continue to investigate anyway, because I still don't know why in some cases it would go through the entire file (high `Blocks read ` as in your tests) and in other cases it would do almost no reading.",
"@lhoestq thanks for the effort, let's stay in touch",
"Just want to say that I am seeing the same issue. Dataset size if 268GB and it takes **3 hours** to load `load_from_disk`, using dataset version `1.9.0`. Filesystem underneath is `Lustre` ",
"Hi @lhoestq, confirmed Windows issue, exact same code running on Linux OS total loading time about 3 minutes.",
"Hmm that's different from what I got. I was on Ubuntu when reporting the initial issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/3405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3405/comments | https://api.github.com/repos/huggingface/datasets/issues/3405/events | https://github.com/huggingface/datasets/issues/3405 | 1,074,360,362 | I_kwDODunzps5ACXAq | 3,405 | ZIP format inference does not work when files located in a dir inside the archive | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-12-08T12:32:15Z | 2021-12-08T13:03:29Z | 2021-12-08T13:03:29Z | null | ## Describe the bug
When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work.
It only works for files located in the root directory of the ZIP file.
## Steps to reproduce the bug
```python
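# Hypothetical setup to reproduce: a CSV file stored inside a sub-directory of the archive
import os
import zipfile

os.makedirs("path/to/zip", exist_ok=True)
with zipfile.ZipFile("path/to/zip/file.zip", "w") as zf:
    zf.writestr("subdir/data.csv", "a,b\n1,2\n")
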
infer_module_for_data_files_in_archives(["path/to/zip/file.zip"], False)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3405/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4187/comments | https://api.github.com/repos/huggingface/datasets/issues/4187/events | https://github.com/huggingface/datasets/pull/4187 | 1,209,721,532 | PR_kwDODunzps42flGp | 4,187 | Don't duplicate data when encoding audio or image | [] | closed | false | null | 5 | 2022-04-20T13:50:37Z | 2022-04-21T09:17:00Z | 2022-04-21T09:10:47Z | null | Right now if you pass both the `bytes` and a local `path` for audio or image data, then the `bytes` are unnecessarily written in the Arrow file, while we could just keep the local `path`.
This PR discards the `bytes` when the audio or image file exists locally.
In particular it's common for audio datasets builders to provide both the bytes and the local path in order to work for both streaming (using the bytes) and non-streaming mode (using a local file - which is often required for audio).
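A rough sketch of that builder pattern (function and variable names are only illustrative):
```python
def _generate_examples(audio_files):
    for idx, path in enumerate(audio_files):
        with open(path, "rb") as f:
            # both "bytes" (needed for streaming) and "path" (non-streaming) are provided;
            # with this PR only the local path is kept in the Arrow file
            yield idx, {"audio": {"path": path, "bytes": f.read()}}
```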
cc @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4187/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4187.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4187",
"merged_at": "2022-04-21T09:10:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4187.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4187"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm not familiar with the concept of streaming vs non-streaming in HF datasets. I just wonder that you have the distinction here. Why doesn't it work to always make use of `bytes`? \"using a local file - which is often required for audio\" - why would that be?\r\n\r\nThe `path` would always point to some location in the `cache_dir`? I think this can be problematic. I would have expected that after I did `dataset.save_to_disk(...)` that I can remove the cache dir. But maybe just because I'm not familiar with HF. Or maybe the docs can be improved to clarify this.\r\n",
"We could always load every data file into `bytes` and save it this way the audio as bytes in `arrow` format, but the problem then would be that it makes the `file` column useless, *i.e.* people cannot inspect the audio file locally anymore or else they would need to first save bytes as a file which is not evident. This either breaks backwards compatibility or forces the user to stored 2x the required size locally. There was a longer discussion here: https://github.com/huggingface/datasets/issues/3663\r\n\r\nIt's a good argument though that `dataset.save_to_disk(...)` should save everything that is needed to the disk and should be independent of other folders, but I do think the arguments of #3663 to not break backwards compatibility and to allow people to inspect the downloaded audio files locally are a bit more important here. \r\n\r\nBut maybe, we could add a flag, `save_files_as_bytes` or `make_independent`, `make_self_contained` or a better name to `save_to_disk(...)` and `push_to_hub(...)` that would allow to make the resulting folder completely independent. ",
"What do you think @mariosasko @lhoestq @polinaeterna @anton-l ?\r\n",
"For context: you can either store the path to local images or audio files, or the bytes of those files.\r\n\r\nIf your images and audio files are local files, then the arrow file from `save_to_disk` will store paths to these files.\r\nIf you want to include the bytes or your images or audio files instead, you must `read()` those files first.\r\nThis can be done by storing the \"bytes\" instead of the \"path\" of the images or audio files.\r\n\r\nOn the other hand, the resulting Parquet files from `push_to_hub` are self-contained, so that anyone can reload the dataset from the Hub. If your dataset contains image or audio data, the Parquet files will store the bytes of your images or audio files.\r\n\r\nFor now I just updated the documentation: https://github.com/huggingface/datasets/pull/4193. Maybe we can also embed the image and audio bytes in `save_to_disk` when we implement sharding, so that is can be done as efficiently as `push_to_hub`.\r\n\r\nAnyway, merging this one :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5995/comments | https://api.github.com/repos/huggingface/datasets/issues/5995/events | https://github.com/huggingface/datasets/pull/5995 | 1,777,088,925 | PR_kwDODunzps5UCvYJ | 5,995 | Support returning dataframe in map transform | [] | closed | false | null | 4 | 2023-06-27T14:15:08Z | 2023-06-28T13:56:02Z | 2023-06-28T13:46:33Z | null | Allow returning Pandas DataFrames in `map` transforms.
(Plus, raise an error in the non-batched mode if a returned PyArrow table/Pandas DataFrame has more than one row)
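A small usage sketch (toy data):
```python
import pandas as pd
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

def add_double(batch):
    # a batched map transform can now return a pandas DataFrame directly
    return pd.DataFrame({"x": batch["x"], "y": [v * 2 for v in batch["x"]]})

ds = ds.map(add_double, batched=True)
```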
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5995/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5995",
"merged_at": "2023-06-28T13:46:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5995"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009725 / 0.011353 (-0.001628) | 0.006014 / 0.011008 (-0.004994) | 0.136039 / 0.038508 (0.097531) | 0.049685 / 0.023109 (0.026576) | 0.492967 / 0.275898 (0.217068) | 0.553775 / 0.323480 (0.230295) | 0.007421 / 0.007986 (-0.000564) | 0.004686 / 0.004328 (0.000357) | 0.106639 / 0.004250 (0.102389) | 0.073483 / 0.037052 (0.036431) | 0.507194 / 0.258489 (0.248705) | 0.535760 / 0.293841 (0.241919) | 0.049666 / 0.128546 (-0.078880) | 0.014139 / 0.075646 (-0.061507) | 0.435459 / 0.419271 (0.016188) | 0.076026 / 0.043533 (0.032493) | 0.454542 / 0.255139 (0.199403) | 0.512724 / 0.283200 (0.229524) | 0.034969 / 0.141683 (-0.106713) | 1.881048 / 1.452155 (0.428893) | 1.959915 / 1.492716 (0.467199) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.265322 / 0.018006 (0.247316) | 0.573963 / 0.000490 (0.573474) | 0.017493 / 0.000200 (0.017293) | 0.000637 / 0.000054 (0.000582) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028712 / 0.037411 (-0.008699) | 0.149554 / 0.014526 (0.135029) | 0.130013 / 0.176557 (-0.046544) | 0.203408 / 0.737135 (-0.533727) | 0.144778 / 0.296338 (-0.151561) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.664198 / 0.215209 (0.448989) | 6.418054 / 2.077655 (4.340399) | 2.602338 / 1.504120 (1.098219) | 2.212992 / 1.541195 (0.671797) | 2.214309 / 1.468490 
(0.745819) | 0.914772 / 4.584777 (-3.670005) | 5.824831 / 3.745712 (2.079119) | 2.865381 / 5.269862 (-2.404481) | 1.906020 / 4.565676 (-2.659657) | 0.106947 / 0.424275 (-0.317328) | 0.013467 / 0.007607 (0.005860) | 0.834556 / 0.226044 (0.608512) | 8.237078 / 2.268929 (5.968150) | 3.380919 / 55.444624 (-52.063705) | 2.656713 / 6.876477 (-4.219764) | 2.834941 / 2.142072 (0.692869) | 1.151241 / 4.805227 (-3.653986) | 0.220860 / 6.500664 (-6.279804) | 0.080781 / 0.075469 (0.005312) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.655128 / 1.841788 (-0.186660) | 18.696108 / 8.074308 (10.621800) | 22.882108 / 10.191392 (12.690716) | 0.236041 / 0.680424 (-0.444383) | 0.031073 / 0.534201 (-0.503128) | 0.525263 / 0.579283 (-0.054021) | 0.632933 / 0.434364 (0.198569) | 0.707228 / 0.540337 (0.166890) | 0.753508 / 1.386936 (-0.633428) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009875 / 0.011353 (-0.001478) | 0.005135 / 0.011008 (-0.005873) | 0.101307 / 0.038508 (0.062799) | 0.044895 / 0.023109 (0.021786) | 0.497824 / 0.275898 (0.221926) | 0.573098 / 0.323480 (0.249618) | 0.006669 / 0.007986 (-0.001317) | 0.004289 / 0.004328 (-0.000039) | 0.105824 / 0.004250 (0.101573) | 0.061002 / 0.037052 (0.023950) | 0.510127 / 0.258489 (0.251638) | 0.581387 / 0.293841 (0.287546) | 0.052843 / 0.128546 (-0.075703) | 0.015506 / 0.075646 (-0.060140) | 0.116057 / 0.419271 (-0.303215) | 0.063444 / 0.043533 (0.019912) | 0.479366 / 0.255139 (0.224227) | 0.518419 / 0.283200 (0.235220) | 0.034876 / 0.141683 (-0.106806) | 2.018446 / 1.452155 (0.566292) | 1.960755 / 1.492716 (0.468039) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269077 / 0.018006 (0.251070) | 0.606059 / 0.000490 (0.605569) | 0.000488 / 0.000200 (0.000288) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032465 / 0.037411 (-0.004946) | 0.136517 / 0.014526 (0.121991) | 0.147740 / 0.176557 (-0.028816) | 0.193802 / 0.737135 (-0.543334) | 0.151876 / 0.296338 (-0.144462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.709866 / 0.215209 (0.494657) | 6.848193 / 2.077655 (4.770538) | 3.310853 / 1.504120 (1.806733) | 2.940813 / 1.541195 (1.399619) | 2.934934 / 1.468490 (1.466444) | 0.927104 / 4.584777 (-3.657673) | 5.921607 / 3.745712 (2.175895) | 4.926558 / 5.269862 (-0.343303) | 2.853269 / 4.565676 (-1.712407) | 0.120278 / 0.424275 (-0.303998) | 0.015468 / 0.007607 (0.007861) | 0.820509 / 0.226044 (0.594464) | 8.263136 / 2.268929 (5.994208) | 3.780214 / 55.444624 (-51.664410) | 3.108482 / 6.876477 (-3.767995) | 3.101544 / 2.142072 (0.959471) | 1.165539 / 4.805227 (-3.639688) | 0.229215 / 6.500664 (-6.271449) | 0.079862 / 0.075469 (0.004393) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.775071 / 1.841788 (-0.066717) | 19.327621 / 8.074308 (11.253313) | 23.057537 / 10.191392 (12.866145) | 0.250649 / 0.680424 (-0.429775) | 0.029767 / 0.534201 (-0.504434) | 0.554774 / 0.579283 (-0.024509) | 0.651919 / 0.434364 (0.217555) | 0.651641 / 0.540337 (0.111304) | 0.762386 / 1.386936 (-0.624550) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005997 / 0.011353 (-0.005356) | 0.003892 / 0.011008 (-0.007116) | 0.098020 / 0.038508 (0.059512) | 0.042584 / 0.023109 (0.019475) | 0.317909 / 0.275898 (0.042011) | 0.395042 / 0.323480 (0.071563) | 0.005358 / 0.007986 (-0.002628) | 0.003266 / 0.004328 (-0.001062) | 0.076698 / 0.004250 (0.072447) | 0.062331 / 0.037052 (0.025279) | 0.334900 / 0.258489 (0.076411) | 0.379355 / 0.293841 (0.085514) | 0.030815 / 0.128546 (-0.097731) | 0.008596 / 0.075646 (-0.067050) | 0.327739 / 0.419271 (-0.091533) | 0.054061 / 0.043533 (0.010528) | 0.311044 / 0.255139 (0.055905) | 0.336705 / 0.283200 (0.053506) | 0.022785 / 0.141683 (-0.118898) | 1.516793 / 1.452155 (0.064639) | 1.590435 / 1.492716 (0.097719) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289157 / 0.018006 (0.271151) | 0.531074 / 0.000490 (0.530585) | 0.004672 / 0.000200 (0.004472) | 0.000095 / 0.000054 (0.000040) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026173 / 0.037411 (-0.011238) | 0.105723 / 0.014526 (0.091197) | 0.118010 / 0.176557 (-0.058547) | 0.178062 / 0.737135 (-0.559073) | 0.120059 / 0.296338 (-0.176279) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.410870 / 0.215209 (0.195661) | 4.042183 / 2.077655 (1.964528) | 1.830059 / 1.504120 (0.325939) | 1.638996 / 1.541195 (0.097802) | 1.701368 / 1.468490 
(0.232878) | 0.529915 / 4.584777 (-4.054861) | 3.693308 / 3.745712 (-0.052404) | 1.827875 / 5.269862 (-3.441986) | 1.063237 / 4.565676 (-3.502440) | 0.065368 / 0.424275 (-0.358907) | 0.010986 / 0.007607 (0.003379) | 0.509399 / 0.226044 (0.283354) | 5.092739 / 2.268929 (2.823810) | 2.293490 / 55.444624 (-53.151135) | 1.958742 / 6.876477 (-4.917735) | 2.024985 / 2.142072 (-0.117088) | 0.646978 / 4.805227 (-4.158249) | 0.138616 / 6.500664 (-6.362048) | 0.062101 / 0.075469 (-0.013368) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.202016 / 1.841788 (-0.639772) | 14.493204 / 8.074308 (6.418896) | 12.992160 / 10.191392 (2.800768) | 0.188922 / 0.680424 (-0.491502) | 0.017594 / 0.534201 (-0.516606) | 0.399917 / 0.579283 (-0.179367) | 0.429760 / 0.434364 (-0.004604) | 0.497906 / 0.540337 (-0.042431) | 0.608745 / 1.386936 (-0.778191) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006164 / 0.011353 (-0.005189) | 0.003980 / 0.011008 (-0.007028) | 0.074676 / 0.038508 (0.036168) | 0.041337 / 0.023109 (0.018228) | 0.400981 / 0.275898 (0.125083) | 0.448791 / 0.323480 (0.125312) | 0.004063 / 0.007986 (-0.003923) | 0.004443 / 0.004328 (0.000114) | 0.075011 / 0.004250 (0.070760) | 0.056494 / 0.037052 (0.019441) | 0.402054 / 0.258489 (0.143565) | 0.446122 / 0.293841 (0.152281) | 0.031752 / 0.128546 (-0.096794) | 0.008835 / 0.075646 (-0.066811) | 0.081226 / 0.419271 (-0.338046) | 0.051501 / 0.043533 (0.007969) | 0.383674 / 0.255139 (0.128535) | 0.405524 / 0.283200 (0.122325) | 0.025929 / 0.141683 (-0.115754) | 1.492985 / 1.452155 (0.040830) | 1.541601 / 1.492716 (0.048885) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.305149 / 0.018006 (0.287142) | 0.497259 / 0.000490 (0.496770) | 0.000420 / 0.000200 (0.000220) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027933 / 0.037411 (-0.009479) | 0.111900 / 0.014526 (0.097374) | 0.124879 / 0.176557 (-0.051678) | 0.178952 / 0.737135 (-0.558184) | 0.127698 / 0.296338 (-0.168640) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448525 / 0.215209 (0.233316) | 4.486791 / 2.077655 (2.409137) | 2.256687 / 1.504120 (0.752567) | 2.061078 / 1.541195 (0.519884) | 2.078924 / 1.468490 (0.610434) | 0.534412 / 4.584777 (-4.050365) | 3.721098 / 3.745712 (-0.024614) | 1.818735 / 5.269862 (-3.451127) | 1.104198 / 4.565676 (-3.461479) | 0.066277 / 0.424275 (-0.357998) | 0.011441 / 0.007607 (0.003834) | 0.550140 / 0.226044 (0.324095) | 5.498079 / 2.268929 (3.229150) | 2.717398 / 55.444624 (-52.727227) | 2.410194 / 6.876477 (-4.466283) | 2.405304 / 2.142072 (0.263231) | 0.665432 / 4.805227 (-4.139796) | 0.141488 / 6.500664 (-6.359177) | 0.064051 / 0.075469 (-0.011419) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272334 / 1.841788 (-0.569454) | 14.901608 / 8.074308 (6.827300) | 14.287857 / 10.191392 (4.096465) | 0.165337 / 0.680424 (-0.515086) | 0.017402 / 0.534201 (-0.516799) | 0.398120 / 0.579283 (-0.181163) | 0.416539 / 0.434364 (-0.017825) | 0.463890 / 0.540337 (-0.076447) | 0.567909 / 1.386936 (-0.819027) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009434 / 0.011353 (-0.001919) | 0.005567 / 0.011008 (-0.005441) | 0.122652 / 0.038508 (0.084144) | 0.050177 / 0.023109 (0.027067) | 0.384292 / 0.275898 (0.108394) | 0.446608 / 0.323480 (0.123128) | 0.006502 / 0.007986 (-0.001484) | 0.004523 / 0.004328 (0.000194) | 0.100581 / 0.004250 (0.096331) | 0.073615 / 0.037052 (0.036563) | 0.420179 / 0.258489 (0.161690) | 0.474631 / 0.293841 (0.180790) | 0.047942 / 0.128546 (-0.080604) | 0.013864 / 0.075646 (-0.061783) | 0.419384 / 0.419271 (0.000112) | 0.088317 / 0.043533 (0.044784) | 0.379620 / 0.255139 (0.124481) | 0.412639 / 0.283200 (0.129440) | 0.048947 / 0.141683 (-0.092736) | 1.823498 / 1.452155 (0.371343) | 1.966629 / 1.492716 (0.473913) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.300669 / 0.018006 (0.282663) | 0.593499 / 0.000490 (0.593009) | 0.007247 / 0.000200 (0.007047) | 0.000114 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030556 / 0.037411 (-0.006856) | 0.119252 / 0.014526 (0.104726) | 0.131403 / 0.176557 (-0.045153) | 0.201845 / 0.737135 (-0.535291) | 0.139350 / 0.296338 (-0.156989) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652400 / 0.215209 (0.437191) | 6.536540 / 2.077655 (4.458886) | 2.644565 / 1.504120 (1.140445) | 2.245181 / 1.541195 (0.703986) | 2.316030 / 1.468490 
(0.847540) | 0.922535 / 4.584777 (-3.662242) | 5.469065 / 3.745712 (1.723353) | 2.800489 / 5.269862 (-2.469373) | 1.749042 / 4.565676 (-2.816635) | 0.108444 / 0.424275 (-0.315831) | 0.015651 / 0.007607 (0.008044) | 0.846085 / 0.226044 (0.620041) | 8.018460 / 2.268929 (5.749531) | 3.338710 / 55.444624 (-52.105914) | 2.675998 / 6.876477 (-4.200479) | 2.918550 / 2.142072 (0.776478) | 1.135145 / 4.805227 (-3.670082) | 0.215165 / 6.500664 (-6.285499) | 0.082066 / 0.075469 (0.006597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.561661 / 1.841788 (-0.280127) | 18.519035 / 8.074308 (10.444727) | 19.046300 / 10.191392 (8.854908) | 0.236890 / 0.680424 (-0.443534) | 0.027681 / 0.534201 (-0.506520) | 0.511998 / 0.579283 (-0.067285) | 0.591627 / 0.434364 (0.157264) | 0.562021 / 0.540337 (0.021683) | 0.679354 / 1.386936 (-0.707582) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009643 / 0.011353 (-0.001710) | 0.005768 / 0.011008 (-0.005241) | 0.104430 / 0.038508 (0.065922) | 0.050044 / 0.023109 (0.026935) | 0.464117 / 0.275898 (0.188219) | 0.518439 / 0.323480 (0.194959) | 0.006935 / 0.007986 (-0.001051) | 0.004316 / 0.004328 (-0.000013) | 0.094330 / 0.004250 (0.090080) | 0.071451 / 0.037052 (0.034399) | 0.492248 / 0.258489 (0.233759) | 0.555740 / 0.293841 (0.261899) | 0.047836 / 0.128546 (-0.080711) | 0.014788 / 0.075646 (-0.060859) | 0.107590 / 0.419271 (-0.311682) | 0.064396 / 0.043533 (0.020863) | 0.451529 / 0.255139 (0.196390) | 0.475025 / 0.283200 (0.191826) | 0.040006 / 0.141683 (-0.101677) | 1.797107 / 1.452155 (0.344953) | 1.879261 / 1.492716 (0.386545) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.298458 / 0.018006 (0.280451) | 0.613022 / 0.000490 (0.612532) | 0.003582 / 0.000200 (0.003382) | 0.000106 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030179 / 0.037411 (-0.007232) | 0.123286 / 0.014526 (0.108760) | 0.132070 / 0.176557 (-0.044486) | 0.190883 / 0.737135 (-0.546252) | 0.138526 / 0.296338 (-0.157812) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.666908 / 0.215209 (0.451699) | 6.489035 / 2.077655 (4.411381) | 2.897027 / 1.504120 (1.392907) | 2.565150 / 1.541195 (1.023956) | 2.504827 / 1.468490 (1.036336) | 0.916112 / 4.584777 (-3.668665) | 5.651751 / 3.745712 (1.906039) | 2.743382 / 5.269862 (-2.526479) | 1.773338 / 4.565676 (-2.792338) | 0.128764 / 0.424275 (-0.295511) | 0.013140 / 0.007607 (0.005533) | 0.803281 / 0.226044 (0.577236) | 8.258874 / 2.268929 (5.989945) | 3.633260 / 55.444624 (-51.811364) | 2.878827 / 6.876477 (-3.997649) | 2.977178 / 2.142072 (0.835106) | 1.130467 / 4.805227 (-3.674760) | 0.226381 / 6.500664 (-6.274283) | 0.081550 / 0.075469 (0.006081) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.842927 / 1.841788 (0.001139) | 18.411520 / 8.074308 (10.337212) | 21.118228 / 10.191392 (10.926836) | 0.231526 / 0.680424 (-0.448898) | 0.029300 / 0.534201 (-0.504901) | 0.527450 / 0.579283 (-0.051834) | 0.618873 / 0.434364 (0.184509) | 0.593314 / 0.540337 (0.052976) | 0.734430 / 1.386936 (-0.652506) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3256/comments | https://api.github.com/repos/huggingface/datasets/issues/3256/events | https://github.com/huggingface/datasets/pull/3256 | 1,052,000,613 | PR_kwDODunzps4udTqg | 3,256 | asserts replaced by exception for text classification task with test. | [] | closed | false | null | 2 | 2021-11-12T14:05:36Z | 2021-11-12T15:09:33Z | 2021-11-12T14:59:32Z | null | I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 .
I would like to first understand the code contribution workflow, so I am keeping the change to a single file rather than making too many changes. Once this gets approved, I will look into the rest.
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3256/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3256/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3256.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3256",
"merged_at": "2021-11-12T14:59:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3256.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3256"
} | true | [
"Haha it looks like you got the chance of being reviewed twice at the same time and got the same suggestion twice x)\r\nAnyway it's all good now so we can merge !",
"Thanks for the feedback. "
] |
https://api.github.com/repos/huggingface/datasets/issues/453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/453/comments | https://api.github.com/repos/huggingface/datasets/issues/453/events | https://github.com/huggingface/datasets/pull/453 | 667,728,247 | MDExOlB1bGxSZXF1ZXN0NDU4MzQwNzky | 453 | add builder tests | [] | closed | false | null | 0 | 2020-07-29T10:22:07Z | 2020-07-29T11:14:06Z | 2020-07-29T11:14:05Z | null | I added `as_dataset` and `download_and_prepare` to the tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/453/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/453.diff",
"html_url": "https://github.com/huggingface/datasets/pull/453",
"merged_at": "2020-07-29T11:14:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/453.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/453"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1950/comments | https://api.github.com/repos/huggingface/datasets/issues/1950/events | https://github.com/huggingface/datasets/pull/1950 | 817,295,235 | MDExOlB1bGxSZXF1ZXN0NTgwODExMjMz | 1,950 | updated multi_nli dataset with missing fields | [] | closed | false | null | 0 | 2021-02-26T11:54:36Z | 2021-03-01T11:08:30Z | 2021-03-01T11:08:29Z | null | 1) updated fields which were missing earlier
2) added tags to README
3) updated a few fields of README
4) new dataset_infos.json and dummy files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1950/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1950/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1950.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1950",
"merged_at": "2021-03-01T11:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1950.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1950"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4689/comments | https://api.github.com/repos/huggingface/datasets/issues/4689/events | https://github.com/huggingface/datasets/pull/4689 | 1,306,230,203 | PR_kwDODunzps47eyw5 | 4,689 | Test extractors for all compression formats | [] | closed | false | null | 1 | 2022-07-15T16:29:55Z | 2022-07-15T17:47:02Z | 2022-07-15T17:35:24Z | null | This PR:
- Adds all compression formats to `test_extractor`
- Tests each base extractor for all compression formats
Note that all compression formats are tested except "rar". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4689/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4689.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4689",
"merged_at": "2022-07-15T17:35:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4689.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4689"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2429/comments | https://api.github.com/repos/huggingface/datasets/issues/2429/events | https://github.com/huggingface/datasets/pull/2429 | 907,321,665 | MDExOlB1bGxSZXF1ZXN0NjU4MTg2ODc0 | 2,429 | Rename QuestionAnswering template to QuestionAnsweringExtractive | [] | closed | false | null | 1 | 2021-05-31T10:04:42Z | 2021-05-31T15:57:26Z | 2021-05-31T15:57:24Z | null | Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2429/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2429/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2429.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2429",
"merged_at": "2021-05-31T15:57:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2429.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2429"
} | true | [
"> I like having \"extractive\" in the name to make things explicit. However this creates an inconsistency with transformers.\r\n> \r\n> See\r\n> https://huggingface.co/transformers/task_summary.html#extractive-question-answering\r\n> \r\n> But this is minor IMO and I'm ok with this renaming\r\n\r\nyes i chose this convention because it allows us to match the `QuestionAnsweringXxx` naming and i think it's better to have `task_name-subtask_name` should auto-complete ever become part of the Hub :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1195/comments | https://api.github.com/repos/huggingface/datasets/issues/1195/events | https://github.com/huggingface/datasets/pull/1195 | 757,889,045 | MDExOlB1bGxSZXF1ZXN0NTMzMTcwMjY2 | 1,195 | addition of py_ast | [] | closed | false | null | 5 | 2020-12-06T10:00:52Z | 2020-12-08T06:19:24Z | 2020-12-08T06:19:24Z | null | The dataset consists of parsed Parsed ASTs that were used to train and evaluate the DeepSyn tool.
The Python programs are collected from GitHub repositories
by removing duplicate files, removing project forks (copies of other existing repositories),
keeping only programs that parse and have at most 30'000 nodes in the AST, and
aiming to remove obfuscated files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1195/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1195/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1195.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1195",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1195.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1195"
} | true | [
"Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \"children\": datasets.Sequence(datasets.Value(\"int32\")),\r\n },\r\n)\r\n```\r\n\r\nHere are a few more things to fix before we can move forward:\r\n- the class name needs to be the CamelCase equivalent of the script name, so here it will have to be `PyAst`\r\n- the `README.md` needs to have the tags at the top\r\n- The homepage/info list at the top should be in the same format as the template (added a suggestion)\r\n- You should add the dataset tags and field description to the README as described here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\r\n\r\nGood luck, let us know if you need any help!",
"Hello @yjernite, changes have been made as we talked. Hope this would suffice. Thanks. Feel free to point out any room to improvement.",
"Good progress! Here's what still needs to be done:\r\n- first, you need to rebase to master for the tests to pass :)\r\n- the information in your `Data Fields` paragraph should go into `Data Instances`. Data fields should describe the fields one by one, as in e.g. https://github.com/huggingface/datasets/tree/master/datasets/eli5#data-fields\r\n- you still need to add the YAML tags obtained with the tagging app\r\n\r\nShould be good to go after that!",
"Hello @yjernite, changes as talked are being done.",
"Looks like this PR includes changes about many other files than the ones for py_ast\r\n\r\nCould you create another branch and another PR please ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6067/comments | https://api.github.com/repos/huggingface/datasets/issues/6067/events | https://github.com/huggingface/datasets/pull/6067 | 1,819,919,025 | PR_kwDODunzps5WT7EQ | 6,067 | fix tqdm lock | [] | closed | false | null | 3 | 2023-07-25T09:32:16Z | 2023-07-25T10:02:43Z | 2023-07-25T09:54:12Z | null | close https://github.com/huggingface/datasets/issues/6066 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6067/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6067",
"merged_at": "2023-07-25T09:54:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6067"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006578 / 0.011353 (-0.004775) | 0.003953 / 0.011008 (-0.007055) | 0.084417 / 0.038508 (0.045908) | 0.076729 / 0.023109 (0.053620) | 0.315369 / 0.275898 (0.039471) | 0.347012 / 0.323480 (0.023533) | 0.005299 / 0.007986 (-0.002686) | 0.003321 / 0.004328 (-0.001007) | 0.063954 / 0.004250 (0.059704) | 0.055810 / 0.037052 (0.018758) | 0.317651 / 0.258489 (0.059162) | 0.352603 / 0.293841 (0.058762) | 0.031355 / 0.128546 (-0.097192) | 0.008493 / 0.075646 (-0.067153) | 0.287295 / 0.419271 (-0.131977) | 0.052716 / 0.043533 (0.009183) | 0.316410 / 0.255139 (0.061271) | 0.328893 / 0.283200 (0.045693) | 0.024005 / 0.141683 (-0.117678) | 1.520333 / 1.452155 (0.068178) | 1.601268 / 1.492716 (0.108552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205144 / 0.018006 (0.187138) | 0.459160 / 0.000490 (0.458670) | 0.000321 / 0.000200 (0.000121) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027503 / 0.037411 (-0.009908) | 0.081476 / 0.014526 (0.066950) | 0.096759 / 0.176557 (-0.079798) | 0.157888 / 0.737135 (-0.579247) | 0.094592 / 0.296338 (-0.201746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384762 / 0.215209 (0.169553) | 3.843503 / 2.077655 (1.765849) | 1.921685 / 1.504120 (0.417565) | 1.752441 / 1.541195 (0.211246) | 1.822105 / 1.468490 
(0.353615) | 0.480243 / 4.584777 (-4.104534) | 3.577220 / 3.745712 (-0.168492) | 5.047560 / 5.269862 (-0.222302) | 2.988008 / 4.565676 (-1.577669) | 0.056430 / 0.424275 (-0.367845) | 0.007180 / 0.007607 (-0.000427) | 0.458113 / 0.226044 (0.232069) | 4.584096 / 2.268929 (2.315168) | 2.395307 / 55.444624 (-53.049317) | 2.080530 / 6.876477 (-4.795947) | 2.239000 / 2.142072 (0.096927) | 0.575822 / 4.805227 (-4.229405) | 0.133303 / 6.500664 (-6.367361) | 0.059449 / 0.075469 (-0.016020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256496 / 1.841788 (-0.585291) | 19.651614 / 8.074308 (11.577306) | 14.232480 / 10.191392 (4.041088) | 0.146461 / 0.680424 (-0.533963) | 0.018632 / 0.534201 (-0.515569) | 0.399844 / 0.579283 (-0.179439) | 0.411225 / 0.434364 (-0.023139) | 0.458203 / 0.540337 (-0.082135) | 0.669916 / 1.386936 (-0.717020) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003898 / 0.011008 (-0.007110) | 0.064037 / 0.038508 (0.025529) | 0.071982 / 0.023109 (0.048873) | 0.361936 / 0.275898 (0.086038) | 0.393165 / 0.323480 (0.069685) | 0.005207 / 0.007986 (-0.002779) | 0.003231 / 0.004328 (-0.001098) | 0.064318 / 0.004250 (0.060068) | 0.055776 / 0.037052 (0.018724) | 0.383087 / 0.258489 (0.124598) | 0.402428 / 0.293841 (0.108587) | 0.031587 / 0.128546 (-0.096959) | 0.008527 / 0.075646 (-0.067119) | 0.070495 / 0.419271 (-0.348777) | 0.048806 / 0.043533 (0.005273) | 0.369932 / 0.255139 (0.114793) | 0.385268 / 0.283200 (0.102068) | 0.023183 / 0.141683 (-0.118500) | 1.491175 / 1.452155 (0.039020) | 1.534191 / 1.492716 (0.041475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224526 / 0.018006 (0.206520) | 0.445460 / 0.000490 (0.444970) | 0.003612 / 0.000200 (0.003412) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029829 / 0.037411 (-0.007583) | 0.087951 / 0.014526 (0.073425) | 0.100069 / 0.176557 (-0.076487) | 0.154944 / 0.737135 (-0.582192) | 0.101271 / 0.296338 (-0.195067) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412385 / 0.215209 (0.197175) | 4.108038 / 2.077655 (2.030384) | 2.163578 / 1.504120 (0.659459) | 2.031934 / 1.541195 (0.490740) | 2.155857 / 1.468490 (0.687367) | 0.481132 / 4.584777 (-4.103645) | 3.620868 / 3.745712 (-0.124844) | 5.222175 / 5.269862 (-0.047687) | 3.115637 / 4.565676 (-1.450039) | 0.056480 / 0.424275 (-0.367795) | 0.007761 / 0.007607 (0.000154) | 0.483553 / 0.226044 (0.257509) | 4.830087 / 2.268929 (2.561159) | 2.629919 / 55.444624 (-52.814705) | 2.327551 / 6.876477 (-4.548926) | 2.539934 / 2.142072 (0.397861) | 0.587963 / 4.805227 (-4.217265) | 0.131085 / 6.500664 (-6.369579) | 0.060807 / 0.075469 (-0.014662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350003 / 1.841788 (-0.491785) | 19.491713 / 8.074308 (11.417405) | 14.030429 / 10.191392 (3.839037) | 0.174762 / 0.680424 (-0.505662) | 0.018523 / 0.534201 (-0.515678) | 0.394946 / 0.579283 (-0.184337) | 0.407652 / 0.434364 (-0.026712) | 0.465806 / 0.540337 (-0.074531) | 0.605417 / 1.386936 (-0.781519) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006235 / 0.011353 (-0.005118) | 0.003675 / 0.011008 (-0.007333) | 0.080680 / 0.038508 (0.042171) | 0.064378 / 0.023109 (0.041268) | 0.394312 / 0.275898 (0.118414) | 0.428143 / 0.323480 (0.104663) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001429) | 0.062592 / 0.004250 (0.058342) | 0.050957 / 0.037052 (0.013904) | 0.396831 / 0.258489 (0.138342) | 0.438280 / 0.293841 (0.144439) | 0.027743 / 0.128546 (-0.100804) | 0.008068 / 0.075646 (-0.067578) | 0.262541 / 0.419271 (-0.156730) | 0.060837 / 0.043533 (0.017304) | 0.397941 / 0.255139 (0.142802) | 0.417012 / 0.283200 (0.133813) | 0.030153 / 0.141683 (-0.111530) | 1.477115 / 1.452155 (0.024960) | 1.516642 / 1.492716 (0.023926) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178032 / 0.018006 (0.160026) | 0.445775 / 0.000490 (0.445286) | 0.004275 / 0.000200 (0.004075) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025025 / 0.037411 (-0.012386) | 0.074113 / 0.014526 (0.059587) | 0.083814 / 0.176557 (-0.092743) | 0.148860 / 0.737135 (-0.588275) | 0.085408 / 0.296338 (-0.210931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393714 / 0.215209 (0.178505) | 3.936589 / 2.077655 (1.858934) | 1.910501 / 1.504120 (0.406381) | 1.729670 / 1.541195 (0.188475) | 1.777647 / 1.468490 
(0.309156) | 0.499532 / 4.584777 (-4.085245) | 3.002385 / 3.745712 (-0.743327) | 2.906916 / 5.269862 (-2.362945) | 1.883321 / 4.565676 (-2.682356) | 0.057546 / 0.424275 (-0.366730) | 0.006492 / 0.007607 (-0.001115) | 0.463605 / 0.226044 (0.237560) | 4.620215 / 2.268929 (2.351287) | 2.399021 / 55.444624 (-53.045603) | 2.182962 / 6.876477 (-4.693514) | 2.357344 / 2.142072 (0.215272) | 0.583946 / 4.805227 (-4.221282) | 0.124644 / 6.500664 (-6.376021) | 0.060831 / 0.075469 (-0.014638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276412 / 1.841788 (-0.565375) | 18.462522 / 8.074308 (10.388214) | 13.877375 / 10.191392 (3.685983) | 0.150584 / 0.680424 (-0.529840) | 0.016675 / 0.534201 (-0.517526) | 0.331711 / 0.579283 (-0.247573) | 0.366659 / 0.434364 (-0.067705) | 0.396400 / 0.540337 (-0.143938) | 0.555418 / 1.386936 (-0.831518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005995 / 0.011353 (-0.005358) | 0.003610 / 0.011008 (-0.007399) | 0.061802 / 0.038508 (0.023294) | 0.059265 / 0.023109 (0.036156) | 0.392628 / 0.275898 (0.116730) | 0.413143 / 0.323480 (0.089663) | 0.004687 / 0.007986 (-0.003299) | 0.002843 / 0.004328 (-0.001486) | 0.061932 / 0.004250 (0.057682) | 0.049466 / 0.037052 (0.012413) | 0.402718 / 0.258489 (0.144229) | 0.415039 / 0.293841 (0.121198) | 0.027352 / 0.128546 (-0.101194) | 0.007965 / 0.075646 (-0.067682) | 0.067456 / 0.419271 (-0.351815) | 0.042336 / 0.043533 (-0.001196) | 0.405543 / 0.255139 (0.150404) | 0.403209 / 0.283200 (0.120010) | 0.021459 / 0.141683 (-0.120224) | 1.442861 / 1.452155 (-0.009293) | 1.491213 / 1.492716 (-0.001503) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248225 / 0.018006 (0.230219) | 0.434174 / 0.000490 (0.433684) | 0.001973 / 0.000200 (0.001773) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.077865 / 0.014526 (0.063339) | 0.086980 / 0.176557 (-0.089577) | 0.143682 / 0.737135 (-0.593453) | 0.088634 / 0.296338 (-0.207705) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417591 / 0.215209 (0.202382) | 4.168700 / 2.077655 (2.091045) | 2.335743 / 1.504120 (0.831623) | 2.208174 / 1.541195 (0.666980) | 2.256658 / 1.468490 (0.788168) | 0.503164 / 4.584777 (-4.081613) | 3.026667 / 3.745712 (-0.719045) | 4.496675 / 5.269862 (-0.773187) | 2.741049 / 4.565676 (-1.824628) | 0.057781 / 0.424275 (-0.366494) | 0.006810 / 0.007607 (-0.000797) | 0.490803 / 0.226044 (0.264759) | 4.914369 / 2.268929 (2.645441) | 2.594250 / 55.444624 (-52.850375) | 2.274552 / 6.876477 (-4.601925) | 2.397529 / 2.142072 (0.255456) | 0.593008 / 4.805227 (-4.212220) | 0.126194 / 6.500664 (-6.374470) | 0.062261 / 0.075469 (-0.013208) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.357561 / 1.841788 (-0.484227) | 18.622995 / 8.074308 (10.548687) | 14.142569 / 10.191392 (3.951177) | 0.146527 / 0.680424 (-0.533897) | 0.016863 / 0.534201 (-0.517338) | 0.336219 / 0.579283 (-0.243064) | 0.348650 / 0.434364 (-0.085714) | 0.385958 / 0.540337 (-0.154380) | 0.517958 / 1.386936 (-0.868978) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5279/comments | https://api.github.com/repos/huggingface/datasets/issues/5279/events | https://github.com/huggingface/datasets/pull/5279 | 1,459,635,002 | PR_kwDODunzps5Dcoue | 5,279 | Warn about checksums | [] | closed | false | null | 3 | 2022-11-22T10:58:48Z | 2022-11-23T11:43:50Z | 2022-11-23T09:47:02Z | null | It takes a lot of time on big datasets to compute the checksums, we should at least add a warning to notify the user about this step. I also mentioned how to disable it, and added a tqdm bar (delay=5 seconds)
cc @ola13 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5279/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5279.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5279",
"merged_at": "2022-11-23T09:47:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5279.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5279"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I'm also in favor of disabling this by default - it's kinda impractical",
"Great, thanks for the quick turnaround on this!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4738/comments | https://api.github.com/repos/huggingface/datasets/issues/4738/events | https://github.com/huggingface/datasets/pull/4738 | 1,315,222,166 | PR_kwDODunzps479hq4 | 4,738 | Use CI unit/integration tests | [] | closed | false | null | 2 | 2022-07-22T16:48:00Z | 2022-07-26T20:19:22Z | 2022-07-26T20:07:05Z | null | This PR:
- Implements separate unit/integration tests
- A failure in integration tests does not cancel the rest of the jobs
- We should implement more robust integration tests: work in progress in a subsequent PR
- For the moment, tests involving network requests are marked as integration: to be evolved | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4738/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4738",
"merged_at": "2022-07-26T20:07:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4738"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think this PR can be merged. Willing to see it in action.\r\n\r\nCC: @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/23 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/23/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/23/comments | https://api.github.com/repos/huggingface/datasets/issues/23/events | https://github.com/huggingface/datasets/pull/23 | 608,508,706 | MDExOlB1bGxSZXF1ZXN0NDEwMjczOTU2 | 23 | Add metrics | [] | closed | false | null | 0 | 2020-04-28T18:02:05Z | 2022-10-04T09:31:56Z | 2020-05-11T08:19:38Z | null | This PR is a draft for adding metrics (sacrebleu and seqeval are added)
use case examples:
`import nlp`
**sacrebleu:**
```
refs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
sys = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.']
sacrebleu = nlp.load_metrics('sacrebleu')
print(sacrebleu.score)
```
**seqeval:**
```
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
seqeval = nlp.load_metrics('seqeval')
print(seqeval.accuracy_score(y_true, y_pred))
print(seqeval.f1_score(y_true, y_pred))
```
_examples are taken from the corresponding web page_
Your comments and suggestions are more than welcome.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/23/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/23/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/23.diff",
"html_url": "https://github.com/huggingface/datasets/pull/23",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/23.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/23"
} | true | [] |
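For context, the draft API sketched in the issue body above (`load_metrics`, reading `.score` directly) differs from the metric API that later stabilized around `load_metric` plus an explicit `compute` step. The snippet below is a hedged illustration of that later-style usage, not the draft code from this PR; the data is rearranged so that each prediction carries its own list of references, as the sacrebleu metric expects.

```python
# Sketch: corpus-level BLEU with the stabilized metric API.
from datasets import load_metric

predictions = ["The dog bit the man.", "It wasn't surprising.", "The man had just bitten him."]
references = [
    ["The dog bit the man.", "The dog had bit the man."],
    ["It was not unexpected.", "No one was surprised."],
    ["The man bit him first.", "The man had bitten the dog."],
]

sacrebleu = load_metric("sacrebleu")
results = sacrebleu.compute(predictions=predictions, references=references)
print(results["score"])  # corpus-level BLEU score
```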
https://api.github.com/repos/huggingface/datasets/issues/1607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1607/comments | https://api.github.com/repos/huggingface/datasets/issues/1607/events | https://github.com/huggingface/datasets/pull/1607 | 771,325,852 | MDExOlB1bGxSZXF1ZXN0NTQyODg5OTky | 1,607 | modified tweets hate speech detection | [] | closed | false | null | 0 | 2020-12-19T07:13:40Z | 2020-12-21T16:08:48Z | 2020-12-21T16:08:48Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1607/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1607.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1607",
"merged_at": "2020-12-21T16:08:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1607.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1607"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/4437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4437/comments | https://api.github.com/repos/huggingface/datasets/issues/4437/events | https://github.com/huggingface/datasets/pull/4437 | 1,258,249,582 | PR_kwDODunzps44-uRW | 4,437 | Add missing columns to `blended_skill_talk` | [] | closed | false | null | 1 | 2022-06-02T14:16:26Z | 2022-06-06T15:49:56Z | 2022-06-06T15:41:25Z | null | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4437/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4437.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4437",
"merged_at": "2022-06-06T15:41:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4437.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4437"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2650/comments | https://api.github.com/repos/huggingface/datasets/issues/2650/events | https://github.com/huggingface/datasets/issues/2650 | 944,672,565 | MDU6SXNzdWU5NDQ2NzI1NjU= | 2,650 | [load_dataset] shard and parallelize the process | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 3 | 2021-07-14T18:04:58Z | 2022-10-12T20:05:07Z | null | null | - Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core.
- If the build crashes, everything done up to that point gets lost
Request: Shard the build over multiple arrow files, which would enable:
- a much faster build, by parallelizing the build process
- if the process crashes, the completed arrow files don't need to be rebuilt
Thank you!
@lhoestq | {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 3,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2650/timeline | null | null | null | null | false | [
"I need the same feature for distributed training",
"I think @TevenLeScao is exploring adding multiprocessing in `GeneratorBasedBuilder._prepare_split` - feel free to post updates here :)",
"Posted a PR to address the building side, still needs something to load sharded arrow files + tests"
] |
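The feature requested in the issue above eventually surfaced as a `num_proc` argument on `load_dataset`, which shards the download and preparation work across worker processes. A hedged sketch of how such a parallelized build would be invoked, assuming a `datasets` release recent enough to support `num_proc` (the dataset and config names are just examples):

```python
# Sketch: build a large dataset with several worker processes instead of a
# single CPU core; requires a `datasets` version that supports num_proc.
from datasets import load_dataset

if __name__ == "__main__":
    dataset = load_dataset(
        "oscar",
        "unshuffled_deduplicated_en",
        split="train",
        num_proc=8,  # shard download/preparation across 8 worker processes
    )
    print(dataset)
```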
https://api.github.com/repos/huggingface/datasets/issues/462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/462/comments | https://api.github.com/repos/huggingface/datasets/issues/462/events | https://github.com/huggingface/datasets/pull/462 | 669,715,547 | MDExOlB1bGxSZXF1ZXN0NDYwMDU0NDgz | 462 | add DoQA (ACL 2020) dataset | [] | closed | false | null | 0 | 2020-07-31T11:25:56Z | 2020-08-03T11:28:27Z | 2020-08-03T11:28:27Z | null | adds DoQA (ACL 2020) dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/462/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/462",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/462"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2386/comments | https://api.github.com/repos/huggingface/datasets/issues/2386/events | https://github.com/huggingface/datasets/issues/2386 | 897,560,049 | MDU6SXNzdWU4OTc1NjAwNDk= | 2,386 | Accessing Arrow dataset cache_files | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-20T23:57:43Z | 2021-05-21T19:18:03Z | 2021-05-21T19:18:03Z | null | ## Describe the bug
In datasets 1.5.0 the following code snippet would have printed the cache_files:
```
from datasets import load_dataset

train_data = load_dataset('conll2003', split='train', cache_dir='data')
print(train_data.cache_files[0]['filename'])
```
However, in the newest release (1.6.1), it prints an empty list.
I also tried loading the dataset with `keep_in_memory=True` argument but still `cache_files` is empty.
I was wondering whether this is a bug or whether I need to pass additional arguments to access the cache_files.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2386/timeline | null | completed | null | null | false | [
"Thanks @bhavitvyamalik for referencing the workaround. Setting `keep_in_memory=False` is working."
] |
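The comment above points at the workaround: when the dataset is kept in RAM, no Arrow file is memory-mapped, so `cache_files` comes back empty. A hedged sketch of forcing the on-disk path so the cache files are listed again, matching the behaviour the reporter confirms works:

```python
# Sketch of the workaround discussed above: load from the on-disk Arrow cache
# (keep_in_memory=False) so that .cache_files is populated.
from datasets import load_dataset

train_data = load_dataset(
    "conll2003",
    split="train",
    cache_dir="data",
    keep_in_memory=False,  # force memory-mapped Arrow files instead of RAM
)
print(train_data.cache_files)  # e.g. a list of dicts with a 'filename' key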
https://api.github.com/repos/huggingface/datasets/issues/1427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1427/comments | https://api.github.com/repos/huggingface/datasets/issues/1427/events | https://github.com/huggingface/datasets/pull/1427 | 760,736,703 | MDExOlB1bGxSZXF1ZXN0NTM1NTE4MzAx | 1,427 | Hebrew project BenYehuda | [] | closed | false | null | 1 | 2020-12-09T22:59:17Z | 2020-12-11T17:39:23Z | 2020-12-11T17:39:23Z | null | Added Hebrew corpus from https://github.com/projectbenyehuda/public_domain_dump | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1427/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1427/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1427.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1427",
"merged_at": "2020-12-11T17:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1427.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1427"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3197/comments | https://api.github.com/repos/huggingface/datasets/issues/3197/events | https://github.com/huggingface/datasets/pull/3197 | 1,042,541,127 | PR_kwDODunzps4t_cry | 3,197 | Fix optimized encoding for arrays | [] | closed | false | null | 0 | 2021-11-02T15:55:53Z | 2021-11-02T19:12:24Z | 2021-11-02T19:12:23Z | null | Hi !
#3124 introduced a regression that made the benchmarks CI fail because of a bad array comparison when checking the first encoded element. This PR fixes this by making sure that encoding is applied on all sequence types except lists.
cc @eladsegal fyi (no big deal) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3197/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3197/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3197.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3197",
"merged_at": "2021-11-02T19:12:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3197.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3197"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4538/comments | https://api.github.com/repos/huggingface/datasets/issues/4538/events | https://github.com/huggingface/datasets/issues/4538 | 1,279,409,786 | I_kwDODunzps5MQj56 | 4,538 | Dataset Viewer issue for Pile of Law | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 5 | 2022-06-22T02:48:40Z | 2022-06-27T07:30:23Z | 2022-06-26T22:26:22Z | null | ### Link
https://huggingface.co/datasets/pile-of-law/pile-of-law
### Description
Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information?
Thanks so much!
### Owner
Yes | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4538/timeline | null | completed | null | null | false | [
"Hi @Breakend, yes – we'll propose a solution today",
"Thanks so much, I appreciate it!",
"Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!",
"Awesome! Thanks for confirming. cc @severo ",
"Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n"
] |
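The resolution quoted above is a `viewer: false` entry in the dataset card's YAML metadata, which can simply be added by hand to the README front matter. The sketch below shows one programmatic way to do the same with `huggingface_hub`; it assumes `metadata_update` writes arbitrary keys such as `viewer` into the card metadata, and that a write token for the repo is available.

```python
# Sketch: write `viewer: false` into the dataset card's YAML metadata so the
# dataset viewer is disabled without gating access to the data.
from huggingface_hub import metadata_update

metadata_update(
    repo_id="pile-of-law/pile-of-law",
    metadata={"viewer": False},
    repo_type="dataset",  # update the dataset repo, not a model repo
    overwrite=True,       # replace the key if it is already present
    # token="hf_...",     # a write token for the repo would be required
)
```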
https://api.github.com/repos/huggingface/datasets/issues/5994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5994/comments | https://api.github.com/repos/huggingface/datasets/issues/5994/events | https://github.com/huggingface/datasets/pull/5994 | 1,776,829,004 | PR_kwDODunzps5UB1cA | 5,994 | Fix select_columns columns order | [] | closed | false | null | 4 | 2023-06-27T12:32:46Z | 2023-06-27T15:40:47Z | 2023-06-27T15:32:43Z | null | Fix the order of the columns in dataset.features when the order changes with `dataset.select_columns()`.
I also fixed the same issue for `dataset.flatten()`
Close https://github.com/huggingface/datasets/issues/5993 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5994/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5994.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5994",
"merged_at": "2023-06-27T15:32:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5994.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5994"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005969 / 0.011353 (-0.005384) | 0.003687 / 0.011008 (-0.007321) | 0.100843 / 0.038508 (0.062335) | 0.036912 / 0.023109 (0.013803) | 0.312389 / 0.275898 (0.036491) | 0.370335 / 0.323480 (0.046855) | 0.003434 / 0.007986 (-0.004552) | 0.003710 / 0.004328 (-0.000619) | 0.076899 / 0.004250 (0.072648) | 0.053647 / 0.037052 (0.016594) | 0.324825 / 0.258489 (0.066336) | 0.367711 / 0.293841 (0.073870) | 0.028079 / 0.128546 (-0.100467) | 0.008326 / 0.075646 (-0.067320) | 0.312342 / 0.419271 (-0.106930) | 0.047423 / 0.043533 (0.003890) | 0.321063 / 0.255139 (0.065924) | 0.336508 / 0.283200 (0.053308) | 0.019973 / 0.141683 (-0.121710) | 1.529334 / 1.452155 (0.077179) | 1.573746 / 1.492716 (0.081030) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210849 / 0.018006 (0.192843) | 0.418798 / 0.000490 (0.418309) | 0.007347 / 0.000200 (0.007147) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022718 / 0.037411 (-0.014694) | 0.098400 / 0.014526 (0.083874) | 0.106590 / 0.176557 (-0.069967) | 0.168460 / 0.737135 (-0.568675) | 0.108401 / 0.296338 (-0.187938) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443066 / 0.215209 (0.227857) | 4.416658 / 2.077655 (2.339003) | 2.088844 / 1.504120 (0.584724) | 1.879564 / 1.541195 (0.338369) | 1.933815 / 1.468490 
(0.465325) | 0.565085 / 4.584777 (-4.019692) | 3.412440 / 3.745712 (-0.333273) | 1.754686 / 5.269862 (-3.515175) | 1.024576 / 4.565676 (-3.541100) | 0.067909 / 0.424275 (-0.356366) | 0.011054 / 0.007607 (0.003447) | 0.534748 / 0.226044 (0.308703) | 5.351457 / 2.268929 (3.082529) | 2.517368 / 55.444624 (-52.927256) | 2.182762 / 6.876477 (-4.693715) | 2.238205 / 2.142072 (0.096133) | 0.672962 / 4.805227 (-4.132265) | 0.136098 / 6.500664 (-6.364566) | 0.066534 / 0.075469 (-0.008935) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281241 / 1.841788 (-0.560547) | 13.872881 / 8.074308 (5.798573) | 13.161023 / 10.191392 (2.969631) | 0.130011 / 0.680424 (-0.550412) | 0.016759 / 0.534201 (-0.517442) | 0.359802 / 0.579283 (-0.219481) | 0.392577 / 0.434364 (-0.041787) | 0.427742 / 0.540337 (-0.112595) | 0.522241 / 1.386936 (-0.864695) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005985 / 0.011353 (-0.005368) | 0.003705 / 0.011008 (-0.007304) | 0.077699 / 0.038508 (0.039191) | 0.035686 / 0.023109 (0.012577) | 0.420356 / 0.275898 (0.144458) | 0.476753 / 0.323480 (0.153273) | 0.003510 / 0.007986 (-0.004475) | 0.002807 / 0.004328 (-0.001521) | 0.077151 / 0.004250 (0.072901) | 0.046420 / 0.037052 (0.009368) | 0.391781 / 0.258489 (0.133292) | 0.461128 / 0.293841 (0.167287) | 0.027847 / 0.128546 (-0.100699) | 0.008322 / 0.075646 (-0.067324) | 0.082768 / 0.419271 (-0.336503) | 0.042629 / 0.043533 (-0.000904) | 0.405745 / 0.255139 (0.150606) | 0.430797 / 0.283200 (0.147598) | 0.019832 / 0.141683 (-0.121851) | 1.556208 / 1.452155 (0.104054) | 1.612166 / 1.492716 (0.119450) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230633 / 0.018006 (0.212626) | 0.401667 / 0.000490 (0.401178) | 0.000776 / 0.000200 (0.000576) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024959 / 0.037411 (-0.012452) | 0.100560 / 0.014526 (0.086034) | 0.109175 / 0.176557 (-0.067382) | 0.159919 / 0.737135 (-0.577217) | 0.112810 / 0.296338 (-0.183528) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.460601 / 0.215209 (0.245392) | 4.620039 / 2.077655 (2.542385) | 2.257900 / 1.504120 (0.753780) | 2.039192 / 1.541195 (0.497997) | 2.064451 / 1.468490 (0.595961) | 0.557887 / 4.584777 (-4.026890) | 3.356100 / 3.745712 (-0.389612) | 1.703578 / 5.269862 (-3.566284) | 1.024984 / 4.565676 (-3.540693) | 0.067602 / 0.424275 (-0.356673) | 0.011450 / 0.007607 (0.003842) | 0.563230 / 0.226044 (0.337186) | 5.632150 / 2.268929 (3.363221) | 2.698701 / 55.444624 (-52.745924) | 2.363218 / 6.876477 (-4.513259) | 2.363997 / 2.142072 (0.221925) | 0.671260 / 4.805227 (-4.133967) | 0.136166 / 6.500664 (-6.364499) | 0.067094 / 0.075469 (-0.008375) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.303030 / 1.841788 (-0.538757) | 14.137277 / 8.074308 (6.062969) | 13.937631 / 10.191392 (3.746239) | 0.162626 / 0.680424 (-0.517798) | 0.016687 / 0.534201 (-0.517514) | 0.363657 / 0.579283 (-0.215626) | 0.392021 / 0.434364 (-0.042343) | 0.427275 / 0.540337 (-0.113062) | 0.512192 / 1.386936 (-0.874744) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005378) | 0.003947 / 0.011008 (-0.007061) | 0.098604 / 0.038508 (0.060096) | 0.036947 / 0.023109 (0.013838) | 0.311844 / 0.275898 (0.035946) | 0.375243 / 0.323480 (0.051763) | 0.003453 / 0.007986 (-0.004533) | 0.003834 / 0.004328 (-0.000495) | 0.077943 / 0.004250 (0.073692) | 0.052956 / 0.037052 (0.015904) | 0.320812 / 0.258489 (0.062323) | 0.373963 / 0.293841 (0.080122) | 0.028382 / 0.128546 (-0.100164) | 0.008525 / 0.075646 (-0.067121) | 0.311306 / 0.419271 (-0.107965) | 0.047029 / 0.043533 (0.003496) | 0.309933 / 0.255139 (0.054794) | 0.335114 / 0.283200 (0.051915) | 0.019629 / 0.141683 (-0.122054) | 1.569771 / 1.452155 (0.117617) | 1.585899 / 1.492716 (0.093182) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216565 / 0.018006 (0.198559) | 0.426717 / 0.000490 (0.426228) | 0.003609 / 0.000200 (0.003409) | 0.000077 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023079 / 0.037411 (-0.014332) | 0.096954 / 0.014526 (0.082428) | 0.105398 / 0.176557 (-0.071158) | 0.165433 / 0.737135 (-0.571703) | 0.109703 / 0.296338 (-0.186636) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.456227 / 0.215209 (0.241018) | 4.529857 / 2.077655 (2.452202) | 2.214054 / 1.504120 (0.709934) | 2.029716 / 1.541195 (0.488521) | 2.081175 / 1.468490 
(0.612685) | 0.563642 / 4.584777 (-4.021135) | 3.355393 / 3.745712 (-0.390320) | 1.765938 / 5.269862 (-3.503924) | 1.039062 / 4.565676 (-3.526615) | 0.067952 / 0.424275 (-0.356323) | 0.011044 / 0.007607 (0.003437) | 0.556935 / 0.226044 (0.330890) | 5.588167 / 2.268929 (3.319239) | 2.667217 / 55.444624 (-52.777407) | 2.337383 / 6.876477 (-4.539094) | 2.429590 / 2.142072 (0.287517) | 0.676972 / 4.805227 (-4.128256) | 0.135782 / 6.500664 (-6.364882) | 0.066323 / 0.075469 (-0.009146) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.237358 / 1.841788 (-0.604429) | 13.910492 / 8.074308 (5.836184) | 13.227275 / 10.191392 (3.035883) | 0.146857 / 0.680424 (-0.533567) | 0.016991 / 0.534201 (-0.517210) | 0.363637 / 0.579283 (-0.215646) | 0.392462 / 0.434364 (-0.041902) | 0.450009 / 0.540337 (-0.090329) | 0.536077 / 1.386936 (-0.850859) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006067 / 0.011353 (-0.005286) | 0.003851 / 0.011008 (-0.007158) | 0.078462 / 0.038508 (0.039954) | 0.036221 / 0.023109 (0.013112) | 0.389195 / 0.275898 (0.113297) | 0.428710 / 0.323480 (0.105230) | 0.004645 / 0.007986 (-0.003341) | 0.002973 / 0.004328 (-0.001355) | 0.078299 / 0.004250 (0.074048) | 0.047076 / 0.037052 (0.010024) | 0.375673 / 0.258489 (0.117184) | 0.432352 / 0.293841 (0.138511) | 0.028212 / 0.128546 (-0.100334) | 0.008475 / 0.075646 (-0.067172) | 0.083902 / 0.419271 (-0.335369) | 0.046699 / 0.043533 (0.003166) | 0.364502 / 0.255139 (0.109363) | 0.389792 / 0.283200 (0.106592) | 0.025266 / 0.141683 (-0.116417) | 1.517458 / 1.452155 (0.065303) | 1.543634 / 1.492716 (0.050918) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.236479 / 0.018006 (0.218472) | 0.411528 / 0.000490 (0.411038) | 0.005213 / 0.000200 (0.005013) | 0.000091 / 0.000054 (0.000036) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025764 / 0.037411 (-0.011647) | 0.103174 / 0.014526 (0.088648) | 0.110609 / 0.176557 (-0.065948) | 0.164630 / 0.737135 (-0.572506) | 0.114863 / 0.296338 (-0.181475) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457155 / 0.215209 (0.241946) | 4.550675 / 2.077655 (2.473021) | 2.350473 / 1.504120 (0.846353) | 2.204919 / 1.541195 (0.663724) | 2.076724 / 1.468490 (0.608234) | 0.563107 / 4.584777 (-4.021670) | 3.390669 / 3.745712 (-0.355043) | 1.741111 / 5.269862 (-3.528751) | 1.033268 / 4.565676 (-3.532408) | 0.068400 / 0.424275 (-0.355875) | 0.011607 / 0.007607 (0.004000) | 0.561944 / 0.226044 (0.335900) | 5.620224 / 2.268929 (3.351296) | 2.705241 / 55.444624 (-52.739384) | 2.344520 / 6.876477 (-4.531957) | 2.386119 / 2.142072 (0.244046) | 0.681583 / 4.805227 (-4.123644) | 0.137272 / 6.500664 (-6.363392) | 0.069217 / 0.075469 (-0.006252) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.322690 / 1.841788 (-0.519098) | 14.464953 / 8.074308 (6.390645) | 14.269350 / 10.191392 (4.077958) | 0.158879 / 0.680424 (-0.521545) | 0.016722 / 0.534201 (-0.517479) | 0.360299 / 0.579283 (-0.218984) | 0.391609 / 0.434364 (-0.042755) | 0.420507 / 0.540337 (-0.119831) | 0.512822 / 1.386936 (-0.874114) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007106 / 0.011353 (-0.004247) | 0.005224 / 0.011008 (-0.005784) | 0.127563 / 0.038508 (0.089055) | 0.055067 / 0.023109 (0.031958) | 0.418660 / 0.275898 (0.142761) | 0.487891 / 0.323480 (0.164411) | 0.005712 / 0.007986 (-0.002274) | 0.004585 / 0.004328 (0.000256) | 0.090994 / 0.004250 (0.086743) | 0.071837 / 0.037052 (0.034784) | 0.446957 / 0.258489 (0.188468) | 0.475966 / 0.293841 (0.182125) | 0.038062 / 0.128546 (-0.090484) | 0.010056 / 0.075646 (-0.065590) | 0.406796 / 0.419271 (-0.012475) | 0.066542 / 0.043533 (0.023009) | 0.413676 / 0.255139 (0.158537) | 0.448624 / 0.283200 (0.165424) | 0.030332 / 0.141683 (-0.111351) | 1.895307 / 1.452155 (0.443152) | 1.904411 / 1.492716 (0.411694) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221246 / 0.018006 (0.203240) | 0.461288 / 0.000490 (0.460799) | 0.005957 / 0.000200 (0.005757) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029255 / 0.037411 (-0.008156) | 0.131299 / 0.014526 (0.116773) | 0.135814 / 0.176557 (-0.040742) | 0.201342 / 0.737135 (-0.535793) | 0.141748 / 0.296338 (-0.154591) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.463936 / 0.215209 (0.248727) | 4.709621 / 2.077655 (2.631966) | 2.093844 / 1.504120 (0.589724) | 1.897963 / 1.541195 (0.356768) | 1.927865 / 1.468490 
(0.459375) | 0.610879 / 4.584777 (-3.973898) | 4.481370 / 3.745712 (0.735658) | 2.112235 / 5.269862 (-3.157627) | 1.203349 / 4.565676 (-3.362327) | 0.074828 / 0.424275 (-0.349447) | 0.013121 / 0.007607 (0.005514) | 0.580894 / 0.226044 (0.354849) | 5.801872 / 2.268929 (3.532943) | 2.579950 / 55.444624 (-52.864674) | 2.251569 / 6.876477 (-4.624908) | 2.421305 / 2.142072 (0.279232) | 0.760938 / 4.805227 (-4.044289) | 0.169554 / 6.500664 (-6.331110) | 0.077499 / 0.075469 (0.002030) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.410419 / 1.841788 (-0.431368) | 17.442331 / 8.074308 (9.368023) | 15.782183 / 10.191392 (5.590791) | 0.180649 / 0.680424 (-0.499775) | 0.021790 / 0.534201 (-0.512411) | 0.511040 / 0.579283 (-0.068243) | 0.510472 / 0.434364 (0.076108) | 0.607141 / 0.540337 (0.066804) | 0.724794 / 1.386936 (-0.662142) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007280 / 0.011353 (-0.004073) | 0.004712 / 0.011008 (-0.006296) | 0.089225 / 0.038508 (0.050717) | 0.053157 / 0.023109 (0.030048) | 0.431949 / 0.275898 (0.156051) | 0.478128 / 0.323480 (0.154648) | 0.006181 / 0.007986 (-0.001804) | 0.003387 / 0.004328 (-0.000941) | 0.083741 / 0.004250 (0.079490) | 0.071610 / 0.037052 (0.034557) | 0.414698 / 0.258489 (0.156209) | 0.484422 / 0.293841 (0.190581) | 0.034988 / 0.128546 (-0.093558) | 0.009831 / 0.075646 (-0.065816) | 0.089644 / 0.419271 (-0.329628) | 0.057053 / 0.043533 (0.013520) | 0.413144 / 0.255139 (0.158005) | 0.445464 / 0.283200 (0.162264) | 0.026109 / 0.141683 (-0.115574) | 1.842899 / 1.452155 (0.390745) | 1.923774 / 1.492716 (0.431057) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245051 / 0.018006 (0.227045) | 0.460444 / 0.000490 (0.459954) | 0.000444 / 0.000200 (0.000244) | 0.000067 / 0.000054 (0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034835 / 0.037411 (-0.002577) | 0.130078 / 0.014526 (0.115553) | 0.147012 / 0.176557 (-0.029544) | 0.203097 / 0.737135 (-0.534038) | 0.149636 / 0.296338 (-0.146702) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521664 / 0.215209 (0.306455) | 5.283865 / 2.077655 (3.206210) | 2.456701 / 1.504120 (0.952581) | 2.266059 / 1.541195 (0.724864) | 2.295387 / 1.468490 (0.826897) | 0.613200 / 4.584777 (-3.971577) | 4.526107 / 3.745712 (0.780394) | 2.047327 / 5.269862 (-3.222535) | 1.261063 / 4.565676 (-3.304614) | 0.070402 / 0.424275 (-0.353873) | 0.014128 / 0.007607 (0.006521) | 0.620929 / 0.226044 (0.394884) | 6.109127 / 2.268929 (3.840198) | 3.081406 / 55.444624 (-52.363218) | 2.658224 / 6.876477 (-4.218253) | 2.671974 / 2.142072 (0.529902) | 0.744081 / 4.805227 (-4.061146) | 0.161498 / 6.500664 (-6.339166) | 0.075148 / 0.075469 (-0.000321) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.585640 / 1.841788 (-0.256148) | 17.884321 / 8.074308 (9.810013) | 15.938937 / 10.191392 (5.747545) | 0.220818 / 0.680424 (-0.459605) | 0.021452 / 0.534201 (-0.512749) | 0.499747 / 0.579283 (-0.079536) | 0.512318 / 0.434364 (0.077954) | 0.562853 / 0.540337 (0.022515) | 0.678512 / 1.386936 (-0.708424) |\n\n</details>\n</details>\n\n\n"
] |
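A small sketch of the behaviour this PR fixes: after the fix, `select_columns` is expected to report `features` in the requested column order (assuming a `datasets` release that includes both `select_columns` and this change).

```python
# Sketch of the ordering behaviour fixed above: selecting columns in a new
# order should reorder dataset.features accordingly.
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"], "c": [0.1, 0.2]})

reordered = ds.select_columns(["c", "a"])
print(list(reordered.features))  # expected: ['c', 'a'] once the fix is in place
print(reordered.column_names)    # ['c', 'a']
```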
https://api.github.com/repos/huggingface/datasets/issues/2720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2720/comments | https://api.github.com/repos/huggingface/datasets/issues/2720/events | https://github.com/huggingface/datasets/pull/2720 | 954,024,426 | MDExOlB1bGxSZXF1ZXN0Njk3OTgxNjMx | 2,720 | fix: 🐛 fix two typos | [] | closed | false | null | 0 | 2021-07-27T15:50:17Z | 2021-07-27T18:38:17Z | 2021-07-27T18:38:16Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2720/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2720.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2720",
"merged_at": "2021-07-27T18:38:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2720.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2720"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3218/comments | https://api.github.com/repos/huggingface/datasets/issues/3218/events | https://github.com/huggingface/datasets/pull/3218 | 1,045,032,313 | PR_kwDODunzps4uG2UA | 3,218 | Fix code quality in riddle_sense dataset | [] | closed | false | null | 0 | 2021-11-04T17:43:20Z | 2021-11-04T17:50:03Z | 2021-11-04T17:50:02Z | null | Fix trailing whitespace.
Fix #3217. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3218/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3218",
"merged_at": "2021-11-04T17:50:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3218"
} | true | [] |