url (stringlengths 58-61) | repository_url (stringclasses 1 value) | labels_url (stringlengths 72-75) | comments_url (stringlengths 67-70) | events_url (stringlengths 65-68) | html_url (stringlengths 46-51) | id (int64 599M-1.83B) | node_id (stringlengths 18-32) | number (int64 1-6.09k) | title (stringlengths 1-290) | labels (list) | state (stringclasses 2 values) | locked (bool, 1 class) | milestone (dict) | comments (int64 0-54) | created_at (stringlengths 20) | updated_at (stringlengths 20) | closed_at (stringlengths 20, ⌀) | active_lock_reason (null) | body (stringlengths 0-228k, ⌀) | reactions (dict) | timeline_url (stringlengths 67-70) | performed_via_github_app (null) | state_reason (stringclasses 3 values) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2585/comments | https://api.github.com/repos/huggingface/datasets/issues/2585/events | https://github.com/huggingface/datasets/issues/2585 | 936,484,419 | MDU6SXNzdWU5MzY0ODQ0MTk= | 2,585 | squad_v2 dataset contains misalignment between the answer text and the context value at the answer index | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-04T15:39:49Z | 2021-07-07T13:18:51Z | 2021-07-07T13:18:51Z | null | ## Describe the bug
The built-in Hugging Face squad_v2 dataset that you can access via datasets.load_dataset contains misalignments between the answers['text'] and the characters in the context at the location specified by answers['answer_start'].
For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['Pure Land'], 'answer_start': [146]}
However, the actual text in the context at location 146 is 'ure Land,', which is an off-by-one error relative to the correct answer.
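As a minimal sketch of that single example (the values are the ones quoted above; `context` is assumed to hold the corresponding squad_v2 context string):
```python
# `context` is assumed to be the context string for id '56d1f453e7d4791d009025bd'.
answers = {'text': ['Pure Land'], 'answer_start': [146]}
start = answers['answer_start'][0]
end = start + len(answers['text'][0])
print(context[start:end])  # prints "ure Land," instead of "Pure Land"
```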
## Steps to reproduce the bug
```python
import datasets
def check_context_answer_alignment(example):
    for a_idx in range(len(example['answers']['text'])):
        # check raw dataset for answer consistency between context and answer
        answer_text = example['answers']['text'][a_idx]
        a_st_idx = example['answers']['answer_start'][a_idx]
        a_end_idx = a_st_idx + len(example['answers']['text'][a_idx])
        answer_text_from_context = example['context'][a_st_idx:a_end_idx]
        if answer_text != answer_text_from_context:
            # print(example['id'])
            return False
    return True

dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True)
start_len = len(dataset)
dataset = dataset.filter(check_context_answer_alignment,
                         num_proc=1,
                         keep_in_memory=True)
end_len = len(dataset)
print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len))
```
## Expected results
This code should result in 0 rows being filtered out from the dataset.
## Actual results
This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location.
This code will reproduce the problem and produce the following count:
"258 instances contain mis-alignment between the answer text and answer index."
## Environment info
Steps to rebuild the Conda environment:
```
# create a virtual environment to stuff all these packages into
conda create -n round8 python=3.8 -y
# activate the virtual environment
conda activate round8
# install pytorch (best done through conda to handle cuda dependencies)
conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia
pip install jsonpickle transformers datasets matplotlib
```
OS: Ubuntu 20.04
Python 3.8
Result of `conda env export`:
```
name: round8
channels:
- pytorch-lts
- nvidia
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=4.5=1_gnu
- blas=1.0=mkl
- brotlipy=0.7.0=py38h27cfd23_1003
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2021.5.25=h06a4308_1
- certifi=2021.5.30=py38h06a4308_0
- cffi=1.14.5=py38h261ae71_0
- chardet=4.0.0=py38h06a4308_1003
- cryptography=3.4.7=py38hd23ed53_0
- cudatoolkit=11.1.74=h6bb024c_0
- ffmpeg=4.2.2=h20bf706_0
- freetype=2.10.4=h5ab3b9f_0
- gmp=6.2.1=h2531618_2
- gnutls=3.6.15=he1e5248_0
- idna=2.10=pyhd3eb1b0_0
- intel-openmp=2021.2.0=h06a4308_610
- jpeg=9b=h024ee3a_2
- lame=3.100=h7b6447c_0
- lcms2=2.12=h3be6417_0
- ld_impl_linux-64=2.35.1=h7274673_9
- libffi=3.3=he6710b0_2
- libgcc-ng=9.3.0=h5101ec6_17
- libgomp=9.3.0=h5101ec6_17
- libidn2=2.3.1=h27cfd23_0
- libopus=1.3.1=h7b6447c_0
- libpng=1.6.37=hbc83047_0
- libstdcxx-ng=9.3.0=hd4cf53a_17
- libtasn1=4.16.0=h27cfd23_0
- libtiff=4.2.0=h85742a9_0
- libunistring=0.9.10=h27cfd23_0
- libuv=1.40.0=h7b6447c_0
- libvpx=1.7.0=h439df22_0
- libwebp-base=1.2.0=h27cfd23_0
- lz4-c=1.9.3=h2531618_0
- mkl=2021.2.0=h06a4308_296
- mkl-service=2.3.0=py38h27cfd23_1
- mkl_fft=1.3.0=py38h42c9631_2
- mkl_random=1.2.1=py38ha9443f7_2
- ncurses=6.2=he6710b0_1
- nettle=3.7.3=hbbd107a_1
- ninja=1.10.2=hff7bd54_1
- numpy=1.20.2=py38h2d18471_0
- numpy-base=1.20.2=py38hfae3a4d_0
- olefile=0.46=py_0
- openh264=2.1.0=hd408876_0
- openssl=1.1.1k=h27cfd23_0
- pillow=8.2.0=py38he98fc37_0
- pip=21.1.2=py38h06a4308_0
- pycparser=2.20=py_2
- pyopenssl=20.0.1=pyhd3eb1b0_1
- pysocks=1.7.1=py38h06a4308_0
- python=3.8.10=h12debd9_8
- pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0
- readline=8.1=h27cfd23_0
- requests=2.25.1=pyhd3eb1b0_0
- setuptools=52.0.0=py38h06a4308_0
- six=1.16.0=pyhd3eb1b0_0
- sqlite=3.35.4=hdfb4753_0
- tk=8.6.10=hbc83047_0
- torchtext=0.9.1=py38
- torchvision=0.9.1=py38_cu111
- typing_extensions=3.7.4.3=pyha847dfd_0
- urllib3=1.26.4=pyhd3eb1b0_0
- wheel=0.36.2=pyhd3eb1b0_0
- x264=1!157.20191217=h7b6447c_0
- xz=5.2.5=h7b6447c_0
- zlib=1.2.11=h7b6447c_3
- zstd=1.4.9=haebb681_0
- pip:
- click==8.0.1
- cycler==0.10.0
- datasets==1.8.0
- dill==0.3.4
- filelock==3.0.12
- fsspec==2021.6.0
- huggingface-hub==0.0.8
- joblib==1.0.1
- jsonpickle==2.0.0
- kiwisolver==1.3.1
- matplotlib==3.4.2
- multiprocess==0.70.12.2
- packaging==20.9
- pandas==1.2.4
- pyarrow==3.0.0
- pyparsing==2.4.7
- python-dateutil==2.8.1
- pytz==2021.1
- regex==2021.4.4
- sacremoses==0.0.45
- tokenizers==0.10.3
- tqdm==4.49.0
- transformers==4.6.1
- xxhash==2.0.2
prefix: /home/mmajurski/anaconda3/envs/round8
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2585/timeline | null | completed | null | null | false | [
"Hi @mmajurski, thanks for reporting this issue.\r\n\r\nIndeed this misalignment arises because the source dataset context field contains leading blank spaces (and these are counted within the answer_start), while our datasets loading script removes these leading blank spaces.\r\n\r\nI'm going to fix our script so that all leading blank spaces in the source dataset are kept, and there is no misalignment between the answer text and the answer_start within the context.",
"If you are going to be altering the data cleaning from the source Squad dataset, here is one thing to consider.\r\nThere are occasional double spaces separating words which it might be nice to get rid of. \r\n\r\nEither way, thank you."
] |
https://api.github.com/repos/huggingface/datasets/issues/925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/925/comments | https://api.github.com/repos/huggingface/datasets/issues/925/events | https://github.com/huggingface/datasets/pull/925 | 753,672,661 | MDExOlB1bGxSZXF1ZXN0NTI5NzA1MzM4 | 925 | Add Turku NLP Corpus for Finnish NER | [] | closed | false | null | 1 | 2020-11-30T17:40:19Z | 2020-12-03T14:07:11Z | 2020-12-03T14:07:10Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/925/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/925",
"merged_at": "2020-12-03T14:07:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/925"
} | true | [
"> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller keep it like this?\r\n\r\n"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4266/comments | https://api.github.com/repos/huggingface/datasets/issues/4266/events | https://github.com/huggingface/datasets/pull/4266 | 1,223,116,436 | PR_kwDODunzps43LeXK | 4,266 | Add HF Speech Bench to Librispeech Dataset Card | [] | closed | false | null | 1 | 2022-05-02T16:59:31Z | 2022-05-05T08:47:20Z | 2022-05-05T08:40:09Z | null | Adds the HF Speech Bench to Librispeech Dataset Card in place of the Papers With Code Leaderboard. Should improve usage and visibility of this leaderboard! Wondering whether this can also be done for [Common Voice 7](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0) and [8](https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) through someone with permissions?
cc @patrickvonplaten: more leaderboard promotion! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4266/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4266/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4266.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4266",
"merged_at": "2022-05-05T08:40:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4266.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4266"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1964/comments | https://api.github.com/repos/huggingface/datasets/issues/1964/events | https://github.com/huggingface/datasets/issues/1964 | 818,624,864 | MDU6SXNzdWU4MTg2MjQ4NjQ= | 1,964 | Datasets.py function load_dataset does not match squad dataset | [] | closed | false | null | 6 | 2021-03-01T08:41:31Z | 2022-10-05T13:09:47Z | 2022-10-05T13:09:47Z | null | ### 1 When I try to train lxmert,and follow the code in README that --dataset name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is:
```
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 217, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
I tried to find the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json).
Is the problem that plain_text does not have a checksum?
### 2 When I try to train lxmert and use a local dataset:
```
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The bug is:
```
['title', 'paragraphs']
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 273, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
I printed the answer_column_name and found that a local SQuAD dataset needs preprocessing by the datasets package so that the code below can work:
```
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
print(datasets["train"].column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
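For reference, a sketch of the raw SQuAD v1.1 nesting (illustrative values only) that explains why a directly loaded file exposes just `title` and `paragraphs`:
```python
# Illustrative row only; the field names follow the official SQuAD v1.1 format.
raw_row = {
    "title": "Some_Article",
    "paragraphs": [
        {
            "context": "Some paragraph of text ...",
            "qas": [
                {
                    "id": "0000",
                    "question": "A question about the paragraph?",
                    "answers": [{"answer_start": 5, "text": "paragraph"}],
                }
            ],
        }
    ],
}
# The flat "question" / "context" / "answers" columns that run_qa.py expects
# only exist after this nesting is unrolled.
```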
## Please tell me how to fix the bug, thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1964/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nTo fix 1, an you try to run this code ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"squad\", download_mode=\"force_redownload\")\r\n```\r\nMaybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1.\r\n\r\nRegarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the column \"context\", \"question\" and \"answers\". Indeed the squad format is a very nested format so you have to preprocess the data. You can do it this way:\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n out = {\"context\": [], \"question\": [], \"answers\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n return out\r\n\r\ndatasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\ncolumn_names = datasets[\"train\"].column_names\r\n\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n```\r\n\r\nHope that helps :)",
"Thks for quickly answering!\r\n### 1 I try the first way,but seems not work \r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 503, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 218, in main\r\n datasets = load_dataset(data_args.dataset_name, download_mode=\"force_redownload\")\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']\r\n```\r\n### 2 I try the second way,and run the examples/question-answering/run_qa.py,it lead to another bug orz..\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 523, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 379, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1120, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1091, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples/question-answering/run_qa.py\", line 339, in prepare_train_features\r\n if len(answers[\"answer_start\"]) == 0:\r\nTypeError: list indices must be integers or slices, not str\r\n```\r\n## may be the function prepare_train_features in run_qa.py need to fix,I think is that the prep\r\n```python\r\nfor i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n print(examples,answers)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the 
text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n``` ",
"## I have fixed it, @lhoestq \r\n### the first section change as you said and add [\"id\"]\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n # print(examples)\r\n out = {\"context\": [], \"question\": [], \"answers\":[],\"id\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n out[\"id\"].append(qa[\"id\"]) \r\n return out\r\ncolumn_names = datasets[\"train\"].column_names if training_args.do_train else datasets[\"validation\"].column_names\r\n# print(datasets[\"train\"].column_names)\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n# Preprocessing the datasets.\r\n# Preprocessing is slighlty different for training and evaluation.\r\nif training_args.do_train:\r\n column_names = datasets[\"train\"].column_names\r\nelse:\r\n column_names = datasets[\"validation\"].column_names\r\n# print(column_names)\r\nquestion_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\ncontext_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\nanswer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n```\r\n### the second section\r\n```python\r\ndef prepare_train_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[question_column_name if pad_on_right else context_column_name],\r\n examples[context_column_name if pad_on_right else question_column_name],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=data_args.max_seq_length,\r\n stride=data_args.doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\" if data_args.pad_to_max_length else False,\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position in the original context. 
This will\r\n # help us compute the start_positions and end_positions.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n # Let's label those examples!\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n # print(examples,answers,offset_mapping,tokenized_examples)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers) == 0:#len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[0][\"answer_start\"]\r\n end_char = start_char + len(answers[0][\"text\"])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n return tokenized_examples\r\n```",
"I'm glad you managed to fix run_qa.py for your case :)\r\n\r\nRegarding the checksum error, I'm not able to reproduce on my side.\r\nThis errors says that the downloaded file doesn't match the expected file.\r\n\r\nCould you try running this and let me know if you get the same output as me ?\r\n```python\r\nfrom datasets.utils.info_utils import get_size_checksum_dict\r\nfrom datasets import cached_path\r\n\r\nget_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\n# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"I run the code,and it show below:\r\n```\r\n>>> from datasets.utils.info_utils import get_size_checksum_dict\r\n>>> from datasets import cached_path\r\n>>> get_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\nDownloading: 30.3MB [04:13, 120kB/s]\r\n{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"Alright ! So in this case redownloading the file with `download_mode=\"force_redownload\"` should fix it. Can you try using `download_mode=\"force_redownload\"` again ?\r\n\r\nNot sure why it didn't work for you the first time though :/"
] |
https://api.github.com/repos/huggingface/datasets/issues/6029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6029/comments | https://api.github.com/repos/huggingface/datasets/issues/6029/events | https://github.com/huggingface/datasets/pull/6029 | 1,803,460,046 | PR_kwDODunzps5VcbPW | 6,029 | [docs] Fix link | [] | closed | false | null | 3 | 2023-07-13T17:24:12Z | 2023-07-13T17:47:41Z | 2023-07-13T17:38:59Z | null | Fixes link to the builder classes :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6029/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6029.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6029",
"merged_at": "2023-07-13T17:38:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6029.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6029"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007039 / 0.011353 (-0.004314) | 0.004175 / 0.011008 (-0.006833) | 0.085426 / 0.038508 (0.046918) | 0.079818 / 0.023109 (0.056709) | 0.321924 / 0.275898 (0.046026) | 0.345482 / 0.323480 (0.022002) | 0.005510 / 0.007986 (-0.002475) | 0.003452 / 0.004328 (-0.000877) | 0.065158 / 0.004250 (0.060907) | 0.058843 / 0.037052 (0.021791) | 0.316280 / 0.258489 (0.057791) | 0.351666 / 0.293841 (0.057825) | 0.031190 / 0.128546 (-0.097357) | 0.008500 / 0.075646 (-0.067147) | 0.289595 / 0.419271 (-0.129676) | 0.053798 / 0.043533 (0.010265) | 0.315804 / 0.255139 (0.060665) | 0.334957 / 0.283200 (0.051757) | 0.024350 / 0.141683 (-0.117332) | 1.515753 / 1.452155 (0.063599) | 1.556215 / 1.492716 (0.063499) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.210378 / 0.018006 (0.192372) | 0.469309 / 0.000490 (0.468820) | 0.002890 / 0.000200 (0.002690) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030214 / 0.037411 (-0.007197) | 0.088492 / 0.014526 (0.073966) | 0.098684 / 0.176557 (-0.077873) | 0.156077 / 0.737135 (-0.581058) | 0.098814 / 0.296338 (-0.197525) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.404548 / 0.215209 (0.189339) | 4.026173 / 2.077655 (1.948518) | 2.043216 / 1.504120 (0.539096) | 1.880997 / 1.541195 (0.339802) | 1.975205 / 1.468490 
(0.506715) | 0.489395 / 4.584777 (-4.095382) | 3.684097 / 3.745712 (-0.061615) | 5.126934 / 5.269862 (-0.142928) | 3.092153 / 4.565676 (-1.473524) | 0.057668 / 0.424275 (-0.366607) | 0.007372 / 0.007607 (-0.000235) | 0.479647 / 0.226044 (0.253603) | 4.780207 / 2.268929 (2.511278) | 2.533457 / 55.444624 (-52.911168) | 2.182126 / 6.876477 (-4.694351) | 2.431834 / 2.142072 (0.289761) | 0.591760 / 4.805227 (-4.213467) | 0.135450 / 6.500664 (-6.365214) | 0.063218 / 0.075469 (-0.012251) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.262053 / 1.841788 (-0.579734) | 20.246992 / 8.074308 (12.172684) | 14.638222 / 10.191392 (4.446830) | 0.150021 / 0.680424 (-0.530403) | 0.018680 / 0.534201 (-0.515521) | 0.395215 / 0.579283 (-0.184068) | 0.421270 / 0.434364 (-0.013094) | 0.458845 / 0.540337 (-0.081492) | 0.634488 / 1.386936 (-0.752448) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007080 / 0.011353 (-0.004273) | 0.004112 / 0.011008 (-0.006896) | 0.066426 / 0.038508 (0.027918) | 0.090088 / 0.023109 (0.066978) | 0.400191 / 0.275898 (0.124293) | 0.429614 / 0.323480 (0.106134) | 0.005428 / 0.007986 (-0.002558) | 0.003501 / 0.004328 (-0.000827) | 0.065056 / 0.004250 (0.060806) | 0.061643 / 0.037052 (0.024590) | 0.398619 / 0.258489 (0.140130) | 0.445497 / 0.293841 (0.151657) | 0.031703 / 0.128546 (-0.096843) | 0.008708 / 0.075646 (-0.066938) | 0.071561 / 0.419271 (-0.347711) | 0.050684 / 0.043533 (0.007151) | 0.385361 / 0.255139 (0.130222) | 0.409349 / 0.283200 (0.126149) | 0.027388 / 0.141683 (-0.114295) | 1.473021 / 1.452155 (0.020866) | 1.525246 / 1.492716 (0.032529) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237710 / 0.018006 (0.219704) | 0.468719 / 0.000490 (0.468230) | 0.000385 / 0.000200 (0.000185) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032539 / 0.037411 (-0.004872) | 0.095324 / 0.014526 (0.080798) | 0.102248 / 0.176557 (-0.074308) | 0.156096 / 0.737135 (-0.581039) | 0.103458 / 0.296338 (-0.192881) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416226 / 0.215209 (0.201017) | 4.141044 / 2.077655 (2.063389) | 2.143732 / 1.504120 (0.639612) | 2.001020 / 1.541195 (0.459825) | 2.091194 / 1.468490 (0.622704) | 0.489977 / 4.584777 (-4.094800) | 3.579615 / 3.745712 (-0.166097) | 3.438082 / 5.269862 (-1.831780) | 2.069031 / 4.565676 (-2.496645) | 0.056994 / 0.424275 (-0.367281) | 0.007362 / 0.007607 (-0.000245) | 0.493077 / 0.226044 (0.267033) | 4.922622 / 2.268929 (2.653694) | 2.627083 / 55.444624 (-52.817541) | 2.301141 / 6.876477 (-4.575336) | 2.356794 / 2.142072 (0.214722) | 0.583792 / 4.805227 (-4.221436) | 0.133707 / 6.500664 (-6.366958) | 0.062892 / 0.075469 (-0.012577) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.364908 / 1.841788 (-0.476880) | 20.641219 / 8.074308 (12.566911) | 14.848528 / 10.191392 (4.657136) | 0.174207 / 0.680424 (-0.506217) | 0.018206 / 0.534201 (-0.515995) | 0.413742 / 0.579283 (-0.165541) | 0.419940 / 0.434364 (-0.014424) | 0.458543 / 0.540337 (-0.081794) | 0.616518 / 1.386936 (-0.770418) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006875 / 0.011353 (-0.004478) | 0.003489 / 0.011008 (-0.007519) | 0.082077 / 0.038508 (0.043569) | 0.103011 / 0.023109 (0.079902) | 0.370572 / 0.275898 (0.094674) | 0.416400 / 0.323480 (0.092920) | 0.004048 / 0.007986 (-0.003938) | 0.003563 / 0.004328 (-0.000765) | 0.062666 / 0.004250 (0.058416) | 0.063664 / 0.037052 (0.026612) | 0.374206 / 0.258489 (0.115717) | 0.425590 / 0.293841 (0.131749) | 0.028174 / 0.128546 (-0.100373) | 0.007906 / 0.075646 (-0.067741) | 0.266251 / 0.419271 (-0.153020) | 0.045923 / 0.043533 (0.002390) | 0.376746 / 0.255139 (0.121607) | 0.401950 / 0.283200 (0.118750) | 0.024628 / 0.141683 (-0.117054) | 1.441903 / 1.452155 (-0.010252) | 1.537494 / 1.492716 (0.044777) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.214696 / 0.018006 (0.196690) | 0.425626 / 0.000490 (0.425137) | 0.003370 / 0.000200 (0.003170) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023133 / 0.037411 (-0.014279) | 0.072374 / 0.014526 (0.057848) | 0.081255 / 0.176557 (-0.095301) | 0.146960 / 0.737135 (-0.590175) | 0.081748 / 0.296338 (-0.214590) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.390683 / 0.215209 (0.175473) | 3.893166 / 2.077655 (1.815511) | 1.884321 / 1.504120 (0.380201) | 1.701899 / 1.541195 (0.160704) | 1.737839 / 1.468490 
(0.269349) | 0.497008 / 4.584777 (-4.087769) | 3.041211 / 3.745712 (-0.704501) | 3.519947 / 5.269862 (-1.749915) | 2.015085 / 4.565676 (-2.550592) | 0.057685 / 0.424275 (-0.366590) | 0.006415 / 0.007607 (-0.001192) | 0.465565 / 0.226044 (0.239520) | 4.635224 / 2.268929 (2.366295) | 2.297941 / 55.444624 (-53.146683) | 1.946670 / 6.876477 (-4.929807) | 2.078527 / 2.142072 (-0.063546) | 0.584101 / 4.805227 (-4.221126) | 0.126488 / 6.500664 (-6.374176) | 0.060819 / 0.075469 (-0.014650) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.223400 / 1.841788 (-0.618388) | 17.960923 / 8.074308 (9.886615) | 13.187683 / 10.191392 (2.996291) | 0.129258 / 0.680424 (-0.551166) | 0.016601 / 0.534201 (-0.517600) | 0.330028 / 0.579283 (-0.249255) | 0.353861 / 0.434364 (-0.080503) | 0.376022 / 0.540337 (-0.164315) | 0.518145 / 1.386936 (-0.868791) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006015 / 0.011353 (-0.005338) | 0.003605 / 0.011008 (-0.007403) | 0.062169 / 0.038508 (0.023661) | 0.056094 / 0.023109 (0.032985) | 0.353085 / 0.275898 (0.077187) | 0.393744 / 0.323480 (0.070265) | 0.004672 / 0.007986 (-0.003313) | 0.002859 / 0.004328 (-0.001469) | 0.062992 / 0.004250 (0.058742) | 0.049767 / 0.037052 (0.012714) | 0.356850 / 0.258489 (0.098361) | 0.403731 / 0.293841 (0.109890) | 0.026664 / 0.128546 (-0.101882) | 0.008026 / 0.075646 (-0.067621) | 0.067944 / 0.419271 (-0.351327) | 0.042133 / 0.043533 (-0.001400) | 0.353865 / 0.255139 (0.098726) | 0.383461 / 0.283200 (0.100261) | 0.021250 / 0.141683 (-0.120433) | 1.428102 / 1.452155 (-0.024053) | 1.481061 / 1.492716 (-0.011655) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223552 / 0.018006 (0.205546) | 0.402390 / 0.000490 (0.401900) | 0.000721 / 0.000200 (0.000521) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025065 / 0.037411 (-0.012347) | 0.075537 / 0.014526 (0.061011) | 0.083519 / 0.176557 (-0.093037) | 0.137068 / 0.737135 (-0.600068) | 0.084165 / 0.296338 (-0.212173) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420176 / 0.215209 (0.204967) | 4.206226 / 2.077655 (2.128571) | 2.168089 / 1.504120 (0.663969) | 1.987299 / 1.541195 (0.446104) | 2.029489 / 1.468490 (0.560999) | 0.495822 / 4.584777 (-4.088955) | 3.106580 / 3.745712 (-0.639132) | 3.833215 / 5.269862 (-1.436647) | 2.450450 / 4.565676 (-2.115226) | 0.056979 / 0.424275 (-0.367296) | 0.006514 / 0.007607 (-0.001093) | 0.503646 / 0.226044 (0.277601) | 5.035035 / 2.268929 (2.766106) | 2.608245 / 55.444624 (-52.836379) | 2.245492 / 6.876477 (-4.630985) | 2.262868 / 2.142072 (0.120795) | 0.590736 / 4.805227 (-4.214491) | 0.124637 / 6.500664 (-6.376027) | 0.061442 / 0.075469 (-0.014027) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.316736 / 1.841788 (-0.525052) | 17.948635 / 8.074308 (9.874327) | 13.752442 / 10.191392 (3.561050) | 0.144107 / 0.680424 (-0.536317) | 0.017112 / 0.534201 (-0.517089) | 0.336537 / 0.579283 (-0.242746) | 0.347832 / 0.434364 (-0.086532) | 0.392944 / 0.540337 (-0.147393) | 0.534455 / 1.386936 (-0.852481) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/6056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6056/comments | https://api.github.com/repos/huggingface/datasets/issues/6056/events | https://github.com/huggingface/datasets/pull/6056 | 1,815,086,963 | PR_kwDODunzps5WD4RY | 6,056 | Implement proper checkpointing for dataset uploading with resume function that does not require remapping shards that have already been uploaded | [] | open | false | null | 3 | 2023-07-21T03:13:21Z | 2023-07-24T15:17:28Z | null | null | Context: issue #5990
In order to implement the checkpointing, I introduce a metadata folder that keeps one YAML file for each set being uploaded. This YAML keeps track of which shards have already been uploaded and the index of the latest one. Using this information, the push_to_hub function can retrieve the past upload history on demand and continue mapping and uploading from where it left off. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6056/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6056",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6056"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6056). All of your documentation changes will be reflected on that endpoint.",
"@lhoestq Reading the filenames is something I tried earlier, but I decided to use the yaml direction because:\r\n\r\n1. The yaml file name is constructed to retain information about the shard_size, and total number of shards, hence ensuring that the files uploaded are not just files that have the same name but actually represent a different configuration of shard_size, and total number of shards. \r\n2. Remembering the total file size is done easily in the yaml, whereas alternatively I am not sure how one could access the file size of the uploaded files without downloading them.\r\n3. I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it. \r\n\r\nIf 1 and 2 can be achieved without an additional yaml, then I would be willing to make those changes. Let me know of any ideas. 1. could be done by changing the data file names, but I'd rather not do that as to prevent breaking existing datasets that try to upload updates to their data. ",
"If the file name depends on the shard's fingerprint **before** mapping then we can know if a shard has been uploaded before mapping and without requiring an extra YAML file. It should do the job imo\r\n\r\n> I also had an issue earlier with the hashes not being consistent with which the yaml helped -- but this is no longer an issue as I found a way around it.\r\n\r\nwhat was the issue ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2776/comments | https://api.github.com/repos/huggingface/datasets/issues/2776/events | https://github.com/huggingface/datasets/issues/2776 | 964,400,596 | MDU6SXNzdWU5NjQ0MDA1OTY= | 2,776 | document `config.HF_DATASETS_OFFLINE` and precedence | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2021-08-09T21:23:17Z | 2021-08-09T21:23:17Z | null | null | https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but:
1. `config.HF_DATASETS_OFFLINE` is not documented
2. the precedence is not documented (env, config)
I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub about `datasets.config.IN_MEMORY_MAX_SIZE`:
Quote:
> The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero.
Context: trying to use `config.HF_DATASETS_OFFLINE` here:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48
but we are uncertain whether it's safe, since it's not documented as a public API.
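For illustration, a minimal sketch of the two ways the flag can be set (assuming the config attribute mirrors the environment variable; whether it does, and which takes precedence, is exactly what this issue asks to document):
```python
# Option 1: environment variable (read when `datasets` is imported)
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets

# Option 2: the (currently undocumented) config attribute, flipped at runtime
datasets.config.HF_DATASETS_OFFLINE = True
```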
Thank you!
@lhoestq, @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2776/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2841/comments | https://api.github.com/repos/huggingface/datasets/issues/2841/events | https://github.com/huggingface/datasets/issues/2841 | 980,497,321 | MDU6SXNzdWU5ODA0OTczMjE= | 2,841 | Adding GLUECoS Hinglish and Spanglish code-switching benchmark | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2021-08-26T17:47:39Z | 2021-10-20T18:41:20Z | null | null | ## Adding a Dataset
- **Name:** GLUECoS
- **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks
- **Paper:** https://aclanthology.org/2020.acl-main.329/
- **Data:** https://github.com/microsoft/GLUECoS
- **Motivation:** We currently only have [one other](https://huggingface.co/datasets/lince) dataset for code-switching
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2841/timeline | null | null | null | null | false | [
"Hi @yjernite I am interested in adding this dataset. \r\nIn the repo they have also added a code mixed MT task from English to Hinglish [here](https://github.com/microsoft/GLUECoS#code-mixed-machine-translation-task). I think this could be a good dataset addition in itself and then I can add the rest of the GLUECoS tasks as one dataset. What do you think?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2925/comments | https://api.github.com/repos/huggingface/datasets/issues/2925/events | https://github.com/huggingface/datasets/pull/2925 | 997,407,034 | PR_kwDODunzps4rzJ9s | 2,925 | Add tutorial for no-code dataset upload | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 3 | 2021-09-15T18:54:42Z | 2021-09-27T17:51:55Z | 2021-09-27T17:51:55Z | null | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2925/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2925",
"merged_at": "2021-09-27T17:51:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2925"
} | true | [
"Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```",
"Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet",
"I just added a mention of the login for private datasets. Don't hesitate to edit or comment.\r\n\r\nOtherwise I think it's all good, feel free to merge it @stevhliu if you don't have other changes to make :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4988/comments | https://api.github.com/repos/huggingface/datasets/issues/4988/events | https://github.com/huggingface/datasets/issues/4988 | 1,376,096,584 | I_kwDODunzps5SBZFI | 4,988 | Add `IterableDataset.from_generator` to the API | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 3 | 2022-09-16T15:19:41Z | 2022-10-05T12:10:49Z | 2022-10-05T12:10:49Z | null | We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
cc @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4988/timeline | null | completed | null | null | false | [
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] |
https://api.github.com/repos/huggingface/datasets/issues/4117 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4117/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4117/comments | https://api.github.com/repos/huggingface/datasets/issues/4117/events | https://github.com/huggingface/datasets/issues/4117 | 1,195,552,406 | I_kwDODunzps5HQq6W | 4,117 | AttributeError: module 'huggingface_hub' has no attribute 'hf_api' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 11 | 2022-04-07T05:52:36Z | 2022-07-28T16:44:04Z | 2022-04-19T15:36:35Z | null | ## Describe the bug
Could you help me, please? I got the following error.
AttributeError: module 'huggingface_hub' has no attribute 'hf_api'
## Steps to reproduce the bug
The error occurs when I import the `datasets` library.
# Sample code to reproduce the bug
from datasets import list_datasets, load_dataset, list_metrics, load_metric
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: macOS-12.3-x86_64-i386-64bit
- Python version: 3.8.9
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- Huggingface-hub: 0.5.0
- Transformers: 4.18.0
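In case it helps, the versions above can be collected without importing `datasets` (the import itself is what fails):
```python
import huggingface_hub
import transformers
from importlib.metadata import version

print("huggingface_hub:", huggingface_hub.__version__)
print("transformers:", transformers.__version__)
print("datasets:", version("datasets"))  # avoids `import datasets`, which raises the error
```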
Thank you in advance. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4117/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4117/timeline | null | completed | null | null | false | [
"Hi @arymbe, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to reproduce your problem.\r\n\r\nCould you please write the complete stack trace? That way we will be able to see which package originates the exception.",
"Hello, thank you for your fast replied. this is the complete error that I got\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\n---------------------------------------------------------------------------\r\n\r\nAttributeError Traceback (most recent call last)\r\n\r\nInput In [27], in <module>\r\n----> 1 from datasets import load_dataset\r\n\r\nvenv/lib/python3.8/site-packages/datasets/__init__.py:39, in <module>\r\n 37 from .arrow_dataset import Dataset, concatenate_datasets\r\n 38 from .arrow_reader import ReadInstruction\r\n---> 39 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n 40 from .combine import interleave_datasets\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n\r\nvenv/lib/python3.8/site-packages/datasets/builder.py:40, in <module>\r\n 32 from .arrow_reader import (\r\n 33 HF_GCP_BASE_URL,\r\n 34 ArrowReader,\r\n (...)\r\n 37 ReadInstruction,\r\n 38 )\r\n 39 from .arrow_writer import ArrowWriter, BeamWriter\r\n---> 40 from .data_files import DataFilesDict, sanitize_patterns\r\n 41 from .dataset_dict import DatasetDict, IterableDatasetDict\r\n 42 from .features import Features\r\n\r\nvenv/lib/python3.8/site-packages/datasets/data_files.py:297, in <module>\r\n 292 except FileNotFoundError:\r\n 293 raise FileNotFoundError(f\"The directory at {base_path} doesn't contain any data file\") from None\r\n 296 def _resolve_single_pattern_in_dataset_repository(\r\n--> 297 dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n 298 pattern: str,\r\n 299 allowed_extensions: Optional[list] = None,\r\n 300 ) -> List[PurePath]:\r\n 301 data_files_ignore = FILES_TO_IGNORE\r\n 302 fs = HfFileSystem(repo_info=dataset_info)\r\n\r\nAttributeError: module 'huggingface_hub' has no attribute 'hf_api'",
"This is weird... It is long ago that the package `huggingface_hub` has a submodule called `hf_api`.\r\n\r\nMaybe you have a problem with your installed `huggingface_hub`...\r\n\r\nCould you please try to update it?\r\n```shell\r\npip install -U huggingface_hub\r\n```",
"Yap, I've updated several times. Then, I've tried numeral combination of datasets and huggingface_hub versions. However, I think your point is right that there is a problem with my huggingface_hub installation. I'll try another way to find the solution. I'll update it later when I get the solution. Thank you :)",
"I'm sorry I can't reproduce your problem.\r\n\r\nMaybe you could try to create a new Python virtual environment and install all dependencies there from scratch. You can use either:\r\n- Python venv: https://docs.python.org/3/library/venv.html\r\n- or conda venv (if you are using conda): https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html",
"Facing the same issue.\r\n\r\nResponse from `pip show datasets`\r\n```\r\nName: datasets\r\nVersion: 1.15.1\r\nSummary: HuggingFace community-driven open-source library of datasets\r\nHome-page: https://github.com/huggingface/datasets\r\nAuthor: HuggingFace Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache 2.0\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: aiohttp, dill, fsspec, huggingface-hub, multiprocess, numpy, packaging, pandas, pyarrow, requests, tqdm, xxhash\r\nRequired-by: lm-eval\r\n```\r\n\r\nResponse from `pip show huggingface_hub`\r\n\r\n```\r\nName: huggingface-hub\r\nVersion: 0.8.1\r\nSummary: Client library to download and publish models, datasets and other repos on the huggingface.co hub\r\nHome-page: https://github.com/huggingface/huggingface_hub\r\nAuthor: Hugging Face, Inc.\r\nAuthor-email: [email protected]\r\nLicense: Apache\r\nLocation: /usr/local/lib/python3.8/dist-packages\r\nRequires: filelock, packaging, pyyaml, requests, tqdm, typing-extensions\r\nRequired-by: datasets\r\n```\r\n\r\nresponse from `datasets-cli env`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/datasets-cli\", line 5, in <module>\r\n from datasets.commands.datasets_cli import main\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/usr/local/lib/python3.8/dist-packages/datasets/data_files.py\", line 120, in <module>\r\n dataset_info: huggingface_hub.hf_api.DatasetInfo,\r\n File \"/usr/local/lib/python3.8/dist-packages/huggingface_hub/__init__.py\", line 105, in __getattr__\r\n raise AttributeError(f\"No {package_name} attribute {name}\")\r\nAttributeError: No huggingface_hub attribute hf_api\r\n```",
"A workaround: \r\nI changed lines around Line 125 in `__init__.py` of `huggingface_hub` to something like\r\n```\r\n__getattr__, __dir__, __all__ = _attach(\r\n __name__,\r\n submodules=['hf_api'],\r\n```\r\nand it works ( which gives `datasets` direct access to `huggingface_hub.hf_api` ).",
"I was getting the same issue. After trying a few versions, following combination worked for me.\r\ndataset==2.3.2\r\nhuggingface_hub==0.7.0\r\n\r\nIn another environment, I just installed latest repos from pip through `pip install -U transformers datasets tokenizers evaluate`, resulting in following versions. This also worked. Hope it helps someone. \r\n\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.20.1",
"For layoutlm_v3 finetune\r\ndatasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5",
"(For layoutlmv3 fine-tuning) In my case, modifying `requirements.txt` as below worked.\r\n\r\n- python = 3.7\r\n\r\n```\r\ndatasets==2.3.2\r\nevaluate==0.1.2\r\nhuggingface-hub==0.8.1\r\nresponse==0.5.0\r\ntokenizers==0.10.1\r\ntransformers==4.12.5\r\nseqeval==1.2.2\r\ndeepspeed==0.5.7\r\ntensorboard==2.7.0\r\nseqeval==1.2.2\r\nsentencepiece\r\ntimm==0.4.12\r\nPillow\r\neinops\r\ntextdistance\r\nshapely\r\n```",
"> For layoutlm_v3 finetune datasets-2.3.2 evaluate-0.1.2 huggingface-hub-0.8.1 responses-0.18.0 tokenizers-0.12.1 transformers-4.12.5\r\n\r\nGOOD!! Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3943/comments | https://api.github.com/repos/huggingface/datasets/issues/3943/events | https://github.com/huggingface/datasets/pull/3943 | 1,171,185,070 | PR_kwDODunzps40ipnu | 3,943 | [Doc] Don't use v for version tags on GitHub | [] | closed | false | null | 1 | 2022-03-16T15:28:30Z | 2022-03-17T11:46:26Z | 2022-03-17T11:46:25Z | null | This removes the `v` automatically used by `doc-builder` for versions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3943/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3943",
"merged_at": "2022-03-17T11:46:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3943"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3943). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/691/comments | https://api.github.com/repos/huggingface/datasets/issues/691/events | https://github.com/huggingface/datasets/issues/691 | 712,389,499 | MDU6SXNzdWU3MTIzODk0OTk= | 691 | Add UI filter to filter datasets based on task | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2020-10-01T00:56:18Z | 2022-02-15T10:46:50Z | 2022-02-15T10:46:50Z | null | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following tasks (non exhaustive list)
- Classification
- Multi label
- Multi class
- Q&A
- Summarization
- Translation
I believe this feature might have some value, for folks trying to find datasets for a particular task, and then testing their model capabilities.
Thank you :) | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/691/timeline | null | completed | null | null | false | [
"Already supported."
] |
https://api.github.com/repos/huggingface/datasets/issues/2850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2850/comments | https://api.github.com/repos/huggingface/datasets/issues/2850/events | https://github.com/huggingface/datasets/issues/2850 | 982,654,644 | MDU6SXNzdWU5ODI2NTQ2NDQ= | 2,850 | Wound segmentation datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | open | false | null | 0 | 2021-08-30T10:44:32Z | 2021-12-08T12:02:00Z | null | null | ## Adding a Dataset
- **Name:** Wound segmentation datasets
- **Description:** annotated wound image dataset
- **Paper:** https://www.nature.com/articles/s41598-020-78799-w
- **Data:** https://github.com/uwm-bigdata/wound-segmentation
- **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2850/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2850/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/events | https://github.com/huggingface/datasets/pull/1912 | 812,034,140 | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | 1,912 | Update: WMT - use mirror links | [] | closed | false | null | 3 | 2021-02-19T13:42:34Z | 2021-02-24T13:44:53Z | 2021-02-24T13:44:53Z | null | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
cc @stas00 @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"merged_at": "2021-02-24T13:44:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1912"
} | true | [
"So much better - thank you for doing that, @lhoestq!",
"Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893",
"Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."
] |
https://api.github.com/repos/huggingface/datasets/issues/1892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/events | https://github.com/huggingface/datasets/issues/1892 | 809,554,174 | MDU6SXNzdWU4MDk1NTQxNzQ= | 1,892 | request to mirror wmt datasets, as they are really slow to download | [] | closed | false | null | 6 | 2021-02-16T18:36:11Z | 2021-10-26T06:55:42Z | 2021-03-25T11:53:23Z | null | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | null | completed | null | null | false | [
"Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check) so it should be possible to redistribute the data with no issues.\r\n\r\ncc @patrickvonplaten who knows more about the wmt scripts",
"Yeah, the scripts are pretty ugly! A big refactor would make sense here...and I also remember that the datasets were veeery slow to download",
"I'm downloading them.\r\nI'm starting with the ones hosted on http://data.statmt.org which are the slowest ones",
"@lhoestq better to use our new git-based system than just raw S3, no? (that way we have built-in CDN etc.)",
"Closing since the urls were changed to mirror urls in #1912 ",
"Hi there! What about mirroring other datasets like [CCAligned](http://www.statmt.org/cc-aligned/) as well? All of them are really slow to download..."
] |
https://api.github.com/repos/huggingface/datasets/issues/4019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4019/comments | https://api.github.com/repos/huggingface/datasets/issues/4019/events | https://github.com/huggingface/datasets/pull/4019 | 1,180,628,293 | PR_kwDODunzps41AlFk | 4,019 | Make yelp_polarity streamable | [] | closed | false | null | 2 | 2022-03-25T10:42:51Z | 2022-03-25T15:02:19Z | 2022-03-25T14:57:16Z | null | It was using `dl_manager.download_and_extract` on a TAR archive, which is not supported in streaming mode. I replaced this by `dl_manager.iter_archive` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4019/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4019/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4019.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4019",
"merged_at": "2022-03-25T14:57:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4019.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4019"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI is failing because of the incomplete dataset card - this is unrelated to the goal of this PR so we can ignore it"
] |
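To make the change concrete, here is a hedged sketch of the streaming-friendly pattern this PR describes (the URL, feature schema, and file layout are placeholders, not the real yelp_polarity script):
```python
import datasets

class StreamableTarSketch(datasets.GeneratorBasedBuilder):
    """Illustrative builder only; not the actual yelp_polarity loading script."""

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        # Before: dl_manager.download_and_extract(url) extracted the TAR, which breaks streaming.
        # After: download only, then iterate lazily over the archive members.
        archive = dl_manager.download("https://example.com/data.tar.gz")  # placeholder URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path_inside_archive, file_object) pairs
        for key, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield key, {"text": f.read().decode("utf-8")}
```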
https://api.github.com/repos/huggingface/datasets/issues/441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/441/comments | https://api.github.com/repos/huggingface/datasets/issues/441/events | https://github.com/huggingface/datasets/pull/441 | 666,148,413 | MDExOlB1bGxSZXF1ZXN0NDU3MDQyMjY3 | 441 | Add features parameter in load dataset | [] | closed | false | null | 2 | 2020-07-27T09:50:01Z | 2020-07-30T12:51:17Z | 2020-07-30T12:51:16Z | null | Added `features` argument in `nlp.load_dataset`.
If the provided features don't match the underlying data type, a `ValueError` is raised.
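A quick usage sketch (the dataset name and schema below are only illustrative):
```python
import nlp

features = nlp.Features(
    {"text": nlp.Value("string"), "label": nlp.ClassLabel(names=["neg", "pos"])}
)
# If the requested features can't be matched to the data, a ValueError is raised.
dataset = nlp.load_dataset("imdb", features=features)
```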
It's a draft PR because #440 needs to be merged first. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/441/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/441.diff",
"html_url": "https://github.com/huggingface/datasets/pull/441",
"merged_at": "2020-07-30T12:51:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/441.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/441"
} | true | [
"This one is ready for review now",
"I changed to using features only, instead of info.\r\nLet mw know if it sounds good to you now @thomwolf "
] |
https://api.github.com/repos/huggingface/datasets/issues/1309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1309/comments | https://api.github.com/repos/huggingface/datasets/issues/1309/events | https://github.com/huggingface/datasets/pull/1309 | 759,501,370 | MDExOlB1bGxSZXF1ZXN0NTM0NDk2NTYx | 1,309 | Add SAMSum Corpus dataset | [] | closed | false | null | 5 | 2020-12-08T14:40:56Z | 2020-12-14T12:32:33Z | 2020-12-14T10:20:55Z | null | Did not spent much time writing README, might update later.
Copied description and some stuff from tensorflow_datasets
https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/samsum.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1309/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1309",
"merged_at": "2020-12-14T10:20:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1309"
} | true | [
"also to fix the check_code_quality CI you have to remove the imports of the unused `csv` and `os`",
"@lhoestq Thanks for the review! I have done what you asked, README is also updated. 🤗 \r\nThe CI fails because of the added dependency. I have never used circleCI before, so I am curious how will you solve that?",
"I just added `py7zr` to our test dependencies",
"merging since the CI is fixed on master",
"Thanks! 🤗 "
] |
https://api.github.com/repos/huggingface/datasets/issues/574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/574/comments | https://api.github.com/repos/huggingface/datasets/issues/574/events | https://github.com/huggingface/datasets/pull/574 | 693,364,853 | MDExOlB1bGxSZXF1ZXN0NDc5ODU5NzQy | 574 | Add modules cache | [] | closed | false | null | 2 | 2020-09-04T16:30:03Z | 2020-09-22T10:27:08Z | 2020-09-07T09:01:35Z | null | As discussed in #554, we should use a module cache directory outside of the python packages directory since we may not have write permissions.
I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.
In this directory, a module `nlp_modules` is created so that datasets can be added to `nlp_modules.datasets` and metrics to `nlp_modules.metrics`. `nlp_modules` doesn't exist on Pypi.
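A rough usage sketch, assuming the `HF_MODULES_CACHE` environment variable (mentioned below) is read when `nlp` is imported:
```python
import os

# Assumption: must be set before importing nlp for the new location to take effect.
os.environ["HF_MODULES_CACHE"] = "/tmp/my_hf_modules"  # any writable directory
import nlp  # downloaded dataset/metric scripts become importable under nlp_modules.*
```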
If someone using cloudpickle still wants to have the downloaded dataset/metrics scripts to be inside the nlp directory, it is still possible to change the environment variable HF_MODULES_CACHE to be a path inside the nlp lib. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/574/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/574.diff",
"html_url": "https://github.com/huggingface/datasets/pull/574",
"merged_at": "2020-09-07T09:01:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/574.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/574"
} | true | [
"All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that",
"I think I fixed it (sorry didn't notice you were on it as well)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4763/comments | https://api.github.com/repos/huggingface/datasets/issues/4763/events | https://github.com/huggingface/datasets/pull/4763 | 1,321,295,876 | PR_kwDODunzps48RMKi | 4,763 | More rigorous shape inference in to_tf_dataset | [] | closed | false | null | 1 | 2022-07-28T18:04:15Z | 2022-09-08T19:17:54Z | 2022-09-08T19:15:41Z | null | `tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause training to fail for dimensions that are needed to determine the shape of weight tensors!
The compromise I used here was to sample several batches from the underlying dataset and apply the `collate_fn` to them, and then to see which dimensions were "empirically variable". There's an obvious problem here, though - if you sample 10 batches and they all have the same shape on a certain dimension, there's still a small chance that the 11th batch will be different, and Keras will throw an error if a dataset tries to emit a tensor whose shape doesn't match the spec.
I encountered this bug in practice once or twice for datasets that were mostly-but-not-totally constant on a given dimension, and I still don't have a perfect solution, but this PR should greatly reduce the risk. It samples many more batches, and also samples very small batches (size 2) - this increases the variability, making it more likely that a few outlier samples will be detected.
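To make the idea concrete, a toy sketch of the "empirically variable" check (an illustration of the principle only, not the code in `to_tf_dataset`):
```python
import numpy as np

def empirical_shape(sample_batches):
    """Dims that differ across the sampled batches are marked None (variable)."""
    shapes = [np.asarray(batch).shape for batch in sample_batches]
    return tuple(
        dim if all(shape[i] == dim for shape in shapes) else None
        for i, dim in enumerate(shapes[0])
    )

# Two small batches with different sequence lengths -> (2, None)
print(empirical_shape([np.zeros((2, 7)), np.zeros((2, 9))]))
```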
Ideally, of course, we'd determine the full output shape analytically, but that's surprisingly tricky when the `collate_fn` can be any arbitrary Python code! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4763/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4763",
"merged_at": "2022-09-08T19:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4763"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3219/comments | https://api.github.com/repos/huggingface/datasets/issues/3219/events | https://github.com/huggingface/datasets/issues/3219 | 1,045,095,000 | I_kwDODunzps4-SuJY | 3,219 | Eventual Invalid Token Error at setup of private datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-11-04T18:50:45Z | 2021-11-08T13:23:06Z | 2021-11-08T08:59:43Z | null | ## Describe the bug
From time to time, there appear Invalid Token errors with private datasets:
- https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534
```
____________ ERROR at setup of test_load_streaming_private_dataset _____________
ValueError: Invalid token passed!
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I...
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
- https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763
```
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908>
hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj'
zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip')
@pytest.fixture(scope="session")
def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path):
repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3))
hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True)
repo_id = f"{USER}/{repo_name}"
hf_api.upload_file(
token=hf_token,
path_or_fileobj=str(zip_csv_path),
path_in_repo="data.zip",
repo_id=repo_id,
> repo_type="dataset",
)
tests/hub_fixtures.py:68:
...
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3219/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3860/comments | https://api.github.com/repos/huggingface/datasets/issues/3860/events | https://github.com/huggingface/datasets/pull/3860 | 1,162,623,329 | PR_kwDODunzps40GpzZ | 3,860 | Small doc fixes | [] | closed | false | null | 2 | 2022-03-08T12:55:39Z | 2022-03-08T17:37:13Z | 2022-03-08T17:37:13Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3860/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3860",
"merged_at": "2022-03-08T17:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3860"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3860). All of your documentation changes will be reflected on that endpoint.",
"There are still some `.. code-block:: python` (e.g. see [this](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes#datasets.Dataset.align_labels_with_mapping)) directives in our codebase, so maybe we can remove those as well as part of this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/4919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4919/comments | https://api.github.com/repos/huggingface/datasets/issues/4919/events | https://github.com/huggingface/datasets/pull/4919 | 1,357,441,599 | PR_kwDODunzps4-IxDZ | 4,919 | feat: improve error message on Keys mismatch. closes #4917 | [] | closed | false | null | 2 | 2022-08-31T14:41:36Z | 2022-09-05T08:46:01Z | 2022-09-05T08:43:33Z | null | Hi @lhoestq what do you think?
Let me give you a code sample:
```py
>>> import datasets
>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})
>>> foo.save_to_disk('foo')
# edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz'
>>> datasets.load_from_disk('foo')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-4863e606b330> in <module>
----> 1 datasets.load_from_disk('foo')
~/code/datasets/src/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)
1851 raise FileNotFoundError(f"Directory {dataset_path} not found")
1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()):
-> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):
1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
~/code/datasets/src/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)
1230 info=dataset_info,
1231 split=split,
-> 1232 fingerprint=state["_fingerprint"],
1233 )
1234
~/code/datasets/src/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
687 self.info.features = inferred_features
688 else: # make sure the nested columns are in the right order
--> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features)
690
691 # Infer fingerprint if None
~/code/datasets/src/datasets/features/features.py in reorder_fields_as(self, other)
1771 return source
1772
-> 1773 return Features(recursive_reorder(self, other))
1774
1775 def flatten(self, max_depth=16) -> "Features":
~/code/datasets/src/datasets/features/features.py in recursive_reorder(source, target, stack)
1760 f"{source.keys()-target.keys()} are missing from dataset.arrow "
1761 f"and {target.keys()-source.keys()} are missing from dataset_info.json"+stack_position)
-> 1762 raise ValueError(message)
1763 return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target}
1764 elif isinstance(source, list):
ValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow).
{'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4919/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4919.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4919",
"merged_at": "2022-09-05T08:43:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4919.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4919"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are having an unrelated issue that makes several tests fail. We are working on that. Once fixed, you will be able to merge the main branch into this, so that you get the fix and the tests pass..."
] |
https://api.github.com/repos/huggingface/datasets/issues/1475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1475/comments | https://api.github.com/repos/huggingface/datasets/issues/1475/events | https://github.com/huggingface/datasets/pull/1475 | 762,187,000 | MDExOlB1bGxSZXF1ZXN0NTM2NzYxMDQz | 1,475 | Fix XML iterparse in opus_dogc dataset | [] | closed | false | null | 0 | 2020-12-11T10:08:18Z | 2020-12-17T11:28:47Z | 2020-12-17T11:28:46Z | null | I forgot to add `elem.clear()` to clear the element from memory. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1475/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1475.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1475",
"merged_at": "2020-12-17T11:28:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1475.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1475"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4263/comments | https://api.github.com/repos/huggingface/datasets/issues/4263/events | https://github.com/huggingface/datasets/pull/4263 | 1,222,723,083 | PR_kwDODunzps43KLnD | 4,263 | Rename imagenet2012 -> imagenet-1k | [] | closed | false | null | 4 | 2022-05-02T10:26:21Z | 2022-05-02T17:50:46Z | 2022-05-02T16:32:57Z | null | On the Hugging Face Hub, users refer to imagenet2012 (from #4178 ) as imagenet-1k in their model tags.
To correctly link models to imagenet, we should rename this dataset `imagenet-1k`.
Later we can add `imagenet-21k` as a new dataset if we want.
Once this one is merged we can delete the `imagenet2012` dataset repository on the Hub.
EDIT: to complete the rationale on why we should name it `imagenet-1k`:
If users specifically added the tag `imagenet-1k` , then it could be for two reasons (not sure which one is predominant), either they
- wanted to make it explicit that it’s not 21k -> the distinction is important for the community
- or they have been following this convention from other models -> the convention implicitly exists already | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4263/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4263.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4263",
"merged_at": "2022-05-02T16:32:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4263.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4263"
} | true | [
"> Later we can add imagenet-21k as a new dataset if we want.\r\n\r\nisn't it what models refer to as `imagenet` already?",
"> isn't it what models refer to as imagenet already?\r\n\r\nI wasn't sure, but it looks like it indeed. Therefore having a dataset `imagenet` for ImageNet 21k makes sense actually.\r\n\r\nEDIT: actually not all `imagenet` tag refer to ImageNet 21k - we will need to correct some of them",
"_The documentation is not available anymore as the PR was closed or merged._",
"should we remove the repo mirror on the hub side or will you do it?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3400/comments | https://api.github.com/repos/huggingface/datasets/issues/3400/events | https://github.com/huggingface/datasets/issues/3400 | 1,073,600,382 | I_kwDODunzps4__dd- | 3,400 | Improve Wikipedia loading script | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 2 | 2021-12-07T17:29:25Z | 2022-03-22T16:52:28Z | 2022-03-22T16:52:28Z | null | As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions:
- _extract_content(filepath):
 - Replace .startswith("#redirect") with a more structured approach: if elem.find(f"./{namespace}redirect") is None: continue
- _parse_and_clean_wikicode(raw_content, parser):
 - Remove rm_template from cleaning -- this is redundant with .strip_code() from mwparserfromhell
- Build a language-specific list of namespace prefixes to filter out per below get_namespace_prefixes
- Optional: strip prefixes like categories -- e.g., Category:Towns in Tianjin becomes Towns in Tianjin
- Optional: strip magic words
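A small sketch of the first two suggestions (here `elem`, `namespace` and `raw_content` are assumed to come from the script's existing parsing loop):
```python
import mwparserfromhell

def is_redirect(elem, namespace):
    # Structured check on the <redirect> element instead of text.startswith("#redirect");
    # pages where this returns True would be skipped.
    return elem.find(f"./{namespace}redirect") is not None

def clean_wikicode(raw_content):
    # strip_code() already drops templates, so a separate rm_template pass is redundant.
    return mwparserfromhell.parse(raw_content).strip_code()
```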
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3400/timeline | null | completed | null | null | false | [
"Thanks! See https://public.paws.wmcloud.org/User:Isaac_(WMF)/HuggingFace%20Wikipedia%20Processing.ipynb for more implementation details / some data around the overhead induced by adding the extra preprocessing steps (stripping link prefixes and magic words)",
"Closed by:\r\n- #3435"
] |
https://api.github.com/repos/huggingface/datasets/issues/1056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1056/comments | https://api.github.com/repos/huggingface/datasets/issues/1056/events | https://github.com/huggingface/datasets/pull/1056 | 756,309,828 | MDExOlB1bGxSZXF1ZXN0NTMxODc1MjA2 | 1,056 | Add deal_or_no_dialog | [] | closed | false | null | 0 | 2020-12-03T15:38:07Z | 2020-12-03T18:13:45Z | 2020-12-03T18:13:45Z | null | Add deal_or_no_dialog Dataset
github: https://github.com/facebookresearch/end-to-end-negotiator
Paper: [Deal or No Deal? End-to-End Learning for Negotiation Dialogues](https://arxiv.org/abs/1706.05125) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1056/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1056.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1056",
"merged_at": "2020-12-03T18:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1056.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1056"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/569/comments | https://api.github.com/repos/huggingface/datasets/issues/569/events | https://github.com/huggingface/datasets/pull/569 | 691,832,720 | MDExOlB1bGxSZXF1ZXN0NDc4NTE2Mzc2 | 569 | Revert "add reuters21578 dataset" | [] | closed | false | null | 0 | 2020-09-03T10:06:16Z | 2020-09-03T10:07:13Z | 2020-09-03T10:07:12Z | null | Reverts huggingface/nlp#471 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/569/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/569",
"merged_at": "2020-09-03T10:07:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/569"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5085/comments | https://api.github.com/repos/huggingface/datasets/issues/5085/events | https://github.com/huggingface/datasets/issues/5085 | 1,400,113,569 | I_kwDODunzps5TdAmh | 5,085 | Filtering on an empty dataset returns a corrupted dataset. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "DF8D62",
"default": false,
"description": "",
"id": 4614514401,
"name": "hacktoberfest",
"node_id": "LA_kwDODunzps8AAAABEwvm4Q",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hacktoberfest"
}
] | closed | false | null | 3 | 2022-10-06T18:18:49Z | 2022-10-07T19:06:02Z | 2022-10-07T18:40:26Z | null | ## Describe the bug
When filtering a dataset twice, where the first result is an empty dataset, the second dataset seems corrupted.
## Steps to reproduce the bug
```python
datasets = load_dataset("glue", "sst2")
dataset_split = datasets['validation']
ds_filter_1 = dataset_split.filter(lambda x: False) # Some filtering condition that leads to an empty dataset
assert ds_filter_1.num_rows == 0
sentences = ds_filter_1['sentence']
assert len(sentences) == 0
ds_filter_2 = ds_filter_1.filter(lambda x: False) # Some other filtering condition
assert ds_filter_2.num_rows == 0
assert 'sentence' in ds_filter_2.column_names
sentences = ds_filter_2['sentence']
```
## Expected results
The last line should be returning an empty list, same as 4 lines above.
## Actual results
The last line currently raises `IndexError: index out of bounds`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-11.6.6-x86_64-i386-64bit
- Python version: 3.9.11
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5085/timeline | null | completed | null | null | false | [
"~~It seems like #5043 fix (merged recently) is the root cause of such behaviour. When we empty indices mapping (because the dataset length equals to zero), we can no longer get column item like: `ds_filter_2['sentence']` which uses\r\n`ds_filter_1._indices.column(0)`~~\r\n\r\n**UPDATE:**\r\nEmpty datasets are returned without going through partial function on `map` method, which will not work to get indices for `filter`: we need to run `get_indices_from_mask_function` partial function on the dataset to get output = `{\"indices\": []}`. But this is complicated since functions used in args, in particular `get_indices_from_mask_function`, do not support empty datasets.\r\nWe can just handle empty datasets aside on filter method.",
"#self-assign",
"Thank you for solving this amazingly quickly!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4244/comments | https://api.github.com/repos/huggingface/datasets/issues/4244/events | https://github.com/huggingface/datasets/pull/4244 | 1,217,732,221 | PR_kwDODunzps425Po6 | 4,244 | task id update | [] | closed | false | null | 2 | 2022-04-27T18:28:14Z | 2022-05-04T10:43:53Z | 2022-05-04T10:36:37Z | null | changed multi input text classification as task id instead of category | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4244/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4244",
"merged_at": "2022-05-04T10:36:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4244"
} | true | [
"Reverted the multi-input-text-classification tag from task_categories and added it as task_ids @lhoestq ",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/644/comments | https://api.github.com/repos/huggingface/datasets/issues/644/events | https://github.com/huggingface/datasets/pull/644 | 704,534,501 | MDExOlB1bGxSZXF1ZXN0NDg5NDQzMTk1 | 644 | Better windows support | [] | closed | false | null | 1 | 2020-09-18T17:17:36Z | 2020-09-25T14:02:30Z | 2020-09-25T14:02:28Z | null | There are a few differences in the behavior of python and pyarrow on windows.
For example, there are restrictions when accessing/deleting files that are open.
Fix #590 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/644/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/644",
"merged_at": "2020-09-25T14:02:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/644"
} | true | [
"This PR is ready :)\r\nIt brings official support for windows.\r\n\r\nSome tests `AWSDatasetTest` are failing.\r\nThis is because I had to fix a few datasets that were not compatible with windows.\r\nThese test will pass once they got merged on master :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3254/comments | https://api.github.com/repos/huggingface/datasets/issues/3254/events | https://github.com/huggingface/datasets/pull/3254 | 1,051,351,172 | PR_kwDODunzps4ubPwR | 3,254 | Update xcopa dataset (fix checksum issues + add translated data) | [] | closed | false | null | 1 | 2021-11-11T20:51:33Z | 2021-11-12T10:30:58Z | 2021-11-12T10:30:57Z | null | This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3254/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3254/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3254.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3254",
"merged_at": "2021-11-12T10:30:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3254.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3254"
} | true | [
"The CI failures are unrelated to the changes (missing fields in the readme and the CER metric error fixed in #3252)."
] |
https://api.github.com/repos/huggingface/datasets/issues/3588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3588/comments | https://api.github.com/repos/huggingface/datasets/issues/3588/events | https://github.com/huggingface/datasets/pull/3588 | 1,106,749,000 | PR_kwDODunzps4xMdiC | 3,588 | Update HellaSwag README.md | [] | closed | false | null | 0 | 2022-01-18T10:46:15Z | 2022-01-20T16:57:43Z | 2022-01-20T16:57:43Z | null | Adding information from the git repo and paper that were missing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3588",
"merged_at": "2022-01-20T16:57:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3588"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/903/comments | https://api.github.com/repos/huggingface/datasets/issues/903/events | https://github.com/huggingface/datasets/pull/903 | 752,360,614 | MDExOlB1bGxSZXF1ZXN0NTI4Njk5NDQ3 | 903 | Fix URL with backslash in Windows | [] | closed | false | null | 8 | 2020-11-27T16:26:24Z | 2020-11-27T18:04:46Z | 2020-11-27T18:04:46Z | null | In Windows, `os.path.join` generates URLs containing backslashes, when the first "path" does not end with a slash.
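A minimal illustration (the URL is a placeholder; the first call only misbehaves when run on Windows):
```python
import os.path
import posixpath

base = "https://example.com/data"  # placeholder URL without a trailing slash

# On Windows, os.path.join uses "\" as the separator, which corrupts the URL:
print(os.path.join(base, "file.txt"))    # 'https://example.com/data\file.txt' on Windows

# posixpath.join (or plain string formatting) always uses "/", so it is safe for URLs:
print(posixpath.join(base, "file.txt"))  # 'https://example.com/data/file.txt' everywhere
```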
In general, `os.path.join` should be avoided to generate URLs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/903/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/903",
"merged_at": "2020-11-27T18:04:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/903"
} | true | [
"@lhoestq I was indeed working on that... to make another commit on this feature branch...",
"But as you prefer... nevermind! :)",
"Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happy to have your opinion",
"Indeed I was thinking of something similar: monckeypatching the HTTP request...",
"Therefore, if you agree, I am removing all the rest of `os.path.join`, both from the code and the docs...",
"If you spot other `os.path.join` for urls in dataset scripts or metrics scripts feel free to fix them.\r\nIn the library itself (/src/datasets) it should be fine since there are tests and a windows CI, but if you have doubts of some usage of `os.path.join` somewhere, let me know.",
"Alright create the test in #905 .\r\nThe windows CI is failing for all the datasets that have bad usage of `os.path.join` for urls.\r\nThere are of course the ones you fixed in this PR (thanks again !) but I found others as well such as pg19 and blimp.\r\nYou can check the full list by looking at the CI failures of the commit 1ce3354",
"I am merging this one as well as #906 that should fix all of the datasets.\r\nThen I'll rebase #905 which adds the test that checks for bad urls and make sure it' all green now"
] |
https://api.github.com/repos/huggingface/datasets/issues/4644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4644/comments | https://api.github.com/repos/huggingface/datasets/issues/4644/events | https://github.com/huggingface/datasets/pull/4644 | 1,296,018,052 | PR_kwDODunzps468mQb | 4,644 | [Minor fix] Typo correction | [] | closed | false | null | 1 | 2022-07-06T15:37:02Z | 2022-07-06T15:56:32Z | 2022-07-06T15:45:16Z | null | recieve -> receive | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4644/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4644/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4644.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4644",
"merged_at": "2022-07-06T15:45:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4644.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4644"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3265/comments | https://api.github.com/repos/huggingface/datasets/issues/3265/events | https://github.com/huggingface/datasets/issues/3265 | 1,052,666,558 | I_kwDODunzps4-vmq- | 3,265 | Checksum error for kilt_task_wow | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-11-13T12:04:17Z | 2021-11-16T11:23:53Z | 2021-11-16T11:21:58Z | null | ## Describe the bug
Checksum verification failed when downloading kilt_tasks_wow. See the error output for details.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('kilt_tasks', 'wow')
```
## Expected results
Download successful
## Actual results
```
Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s]
Traceback (most recent call last):
File "kilt_wow.py", line 30, in <module>
main()
File "kilt_wow.py", line 27, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data
dataset = self.load_dataset()
File "kilt_wow.py", line 21, in load_dataset
return datasets.load_dataset('kilt_tasks','wow')
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset
builder_instance.download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare
verify_checksums(
File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl']
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.15.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3265/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3265/timeline | null | completed | null | null | false | [
"Using `dataset = load_dataset(\"kilt_tasks\", \"wow\", ignore_verifications=True)` may fix it, but I do not think it is a elegant solution.",
"Hi @slyviacassell, thanks for reporting.\r\n\r\nYes, there is an issue with the checksum verification. I'm fixing it.\r\n\r\nAnd as you pointed out, in the meantime, you can circumvent the problem by passing `ignore_verifications=True`. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2976/comments | https://api.github.com/repos/huggingface/datasets/issues/2976/events | https://github.com/huggingface/datasets/issues/2976 | 1,008,647,889 | I_kwDODunzps48Hr7R | 2,976 | Can't load dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-09-27T21:38:14Z | 2022-12-01T09:12:29Z | 2021-09-28T06:53:01Z | null | I'm trying to load a wikitext dataset
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext")
```
ValueError: Config name is missing.
Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1']
Example of usage:
`load_dataset('wikitext', 'wikitext-103-raw-v1')`.
If I try
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext-2-v1")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/wikitext-2-v1/wikitext-2-v1.py
#### Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (colab)
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2976/timeline | null | completed | null | null | false | [
"Hi @mskovalova, \r\n\r\nSome datasets have multiple configurations. Therefore, in order to load them, you have to specify both the *dataset name* and the *configuration name*.\r\n\r\nIn the error message you got, you have a usage example:\r\n- To load the 'wikitext-103-raw-v1' configuration of the 'wikitext' dataset, you should use: \r\n ```python\r\n load_dataset('wikitext', 'wikitext-103-raw-v1')\r\n ```\r\n\r\nIn your case, if you would like to load the 'wikitext-2-v1' configuration of the 'wikitext' dataset, please use:\r\n```python\r\nraw_datasets = load_dataset(\"wikitext\", \"wikitext-2-v1\")\r\n```",
"Hi, if I want to load the dataset from local file, then how to specify the configuration name?"
] |
https://api.github.com/repos/huggingface/datasets/issues/278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/278/comments | https://api.github.com/repos/huggingface/datasets/issues/278/events | https://github.com/huggingface/datasets/issues/278 | 640,518,917 | MDU6SXNzdWU2NDA1MTg5MTc= | 278 | MemoryError when loading German Wikipedia | [] | closed | false | null | 7 | 2020-06-17T15:06:21Z | 2020-06-19T12:53:02Z | 2020-06-19T12:53:02Z | null | Hi, first off let me say thank you for all the awesome work you're doing at Hugging Face across all your projects (NLP, Transformers, Tokenizers) - they're all amazing contributions to us working with NLP models :)
I'm trying to download the German Wikipedia dataset as follows:
```
wiki = nlp.load_dataset("wikipedia", "20200501.de", split="train")
```
However, when I do so, I get the following error:
```
Downloading and preparing dataset wikipedia/20200501.de (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/ubuntu/.cache/huggingface/datasets/wikipedia/20200501.de/1.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/load.py", line 520, in load_dataset
save_infos=save_infos,
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 433, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/ubuntu/anaconda3/envs/albert/lib/python3.7/site-packages/nlp/builder.py", line 824, in _download_and_prepare
"\n\t`{}`".format(usage_example)
nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory).
Example of usage:
`load_dataset('wikipedia', '20200501.de', beam_runner='DirectRunner')`
```
So, following on from the example usage at the bottom, I tried specifying `beam_runner='DirectRunner`, however when I do this after about 20 min after the data has all downloaded, I get a `MemoryError` as warned.
This isn't an issue for the English or French Wikipedia datasets (I've tried both), as neither seem to require that `beam_runner` be specified. Can you please clarify why this is an issue for the German dataset?
My nlp version is 0.2.1.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/278/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nAs you noticed, \"big\" datasets like Wikipedia require apache beam to be processed.\r\nHowever users usually don't have an apache beam runtime available (spark, dataflow, etc.) so our goal for this library is to also make available processed versions of these datasets, so that users can just download and use them right away.\r\n\r\nThis is the case for english and french wikipedia right now: we've processed them ourselves and now they are available from our google storage. However we've not processed the german one (yet).",
"Hi @lhoestq \r\n\r\nThank you for your quick reply. I thought this might be the case, that the processing was done for some languages and not for others. Is there any set timeline for when other languages (German, Italian) will be processed?\r\n\r\nGiven enough memory, is it possible to process the data ourselves by specifying the `beam_runner`?",
"Adding them is definitely in our short term objectives. I'll be working on this early next week :)\r\n\r\nAlthough if you have an apache beam runtime feel free to specify the beam runner. You can find more info [here](https://github.com/huggingface/nlp/blob/master/docs/beam_dataset.md) on how to make it work on Dataflow but you can adapt it for Spark or any other beam runtime (by changing the `runner`).\r\n\r\nHowever if you don't have a beam runtime and even if you have enough memory, I discourage you to use the `DirectRunner` on the german or italian wikipedia. According to Apache Beam documentation it was made for testing purposes and therefore it is memory-inefficient.",
"German is [almost] done @gregburman",
"I added the German and the Italian Wikipedia to our google cloud storage:\r\nFirst update the `nlp` package to 0.3.0:\r\n```bash\r\npip install nlp --upgrade\r\n```\r\nand then\r\n```python\r\nfrom nlp import load_dataset\r\nwiki_de = load_dataset(\"wikipedia\", \"20200501.de\")\r\nwiki_it = load_dataset(\"wikipedia\", \"20200501.it\")\r\n```\r\nThe datasets are downloaded and directly ready to use (no processing).",
"Hi @lhoestq \r\n\r\nWow, thanks so much, that's **really** incredible! I was considering looking at creating my own Beam Dataset, as per the doc you linked, but instead opted to process the data myself using `wikiextractor`. However, now that this is available, I'll definitely switch across and use it.\r\n\r\nThanks so much for the incredible work, this really helps out our team considerably!\r\n\r\nHave a great (and well-deserved ;) weekend ahead!\r\n\r\nP.S. I'm not sure if I should close the issue here - if so I'm happy to do so.",
"Thanks for your message, glad I could help :)\r\nClosing this one."
] |
https://api.github.com/repos/huggingface/datasets/issues/653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/653/comments | https://api.github.com/repos/huggingface/datasets/issues/653/events | https://github.com/huggingface/datasets/pull/653 | 705,482,391 | MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4 | 653 | handle data alteration when trying type | [] | closed | false | null | 0 | 2020-09-21T10:41:49Z | 2020-09-21T16:13:06Z | 2020-09-21T16:13:05Z | null | Fix #649
The bug came from the type inference that didn't handle a weird case in Pyarrow.
Indeed this code runs without error but alters the data in arrow:
```python
import pyarrow as pa
type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type)
print(array_with_altered_data[0].as_py())
# {'a': {'b': 'foo'}} -> the sub-field "c" is missing
```
(I don't know if this is intended in pyarrow tbh)
We didn't take this case into account during type inference. By default it was keeping the old features and could silently alter the data.
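For illustration, the kind of consistency check described in the fix below could look roughly like this (a sketch, not the exact patch):
```python
import pyarrow as pa

first_example = {"a": {"b": "foo", "c": "bar"}}
old_type = pa.struct({"a": pa.struct({"b": pa.string()})})
array = pa.array([first_example] * 10, type=old_type)

# If casting to the old/inferred type silently dropped data, re-infer the type instead.
if array[0].as_py() != first_example:
    array = pa.array([first_example] * 10)  # let pyarrow infer the full type
```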
To fix that I added a line that checks that the first element of the array is not altered. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/653/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/653",
"merged_at": "2020-09-21T16:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/653"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3307/comments | https://api.github.com/repos/huggingface/datasets/issues/3307/events | https://github.com/huggingface/datasets/pull/3307 | 1,059,226,297 | PR_kwDODunzps4uzlWa | 3,307 | Add IndoNLI dataset | [] | closed | false | null | 1 | 2021-11-20T20:46:03Z | 2021-11-25T14:51:48Z | 2021-11-25T14:51:48Z | null | This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3307/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3307",
"merged_at": "2021-11-25T14:51:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3307"
} | true | [
"@lhoestq thanks for the review! I've modified the labels to follow other NLI datasets.\r\nPlease review my change and let me know if I miss anything."
] |
https://api.github.com/repos/huggingface/datasets/issues/598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/598/comments | https://api.github.com/repos/huggingface/datasets/issues/598/events | https://github.com/huggingface/datasets/issues/598 | 697,156,501 | MDU6SXNzdWU2OTcxNTY1MDE= | 598 | The current version of the package on github has an error when loading dataset | [] | closed | false | null | 3 | 2020-09-09T21:03:23Z | 2020-09-10T06:25:21Z | 2020-09-09T22:57:28Z | null | Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine):
To recreate the error:
First, installing nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
```
Then run:
```
from nlp import load_dataset
dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
```
will give error:
```
>>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train')
Checking /home/zeyuy/.cache/huggingface/datasets/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Found script file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.py
Found dataset infos file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/dataset_infos.json to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.json
Loading Dataset Infos from /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Overwrite dataset info from restored data version.
Loading Dataset info from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Reusing dataset wikitext (/home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d)
Constructing Dataset for split train, from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/load.py", line 600, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 611, in as_dataset
datasets = utils.map_nested(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 216, in map_nested
return function(data_struct)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 631, in _build_single_dataset
ds = self._as_dataset(
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 704, in _as_dataset
return Dataset(**dataset_kwargs)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/arrow_dataset.py", line 188, in __init__
self._fingerprint = generate_fingerprint(self)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 91, in generate_fingerprint
hasher.update(key)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 361, in dumps
with _no_cache_fields(obj):
File "/home/zeyuy/miniconda3/lib/python3.8/contextlib.py", line 113, in __enter__
return next(self.gen)
File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 348, in _no_cache_fields
if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, "cache") and isinstance(obj.cache, dict):
AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/598/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class",
"I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time. Didn't realize loading the data part requires using tokenizer.\r\n",
"Yes it shouldn’t fail with older version of transformers since this is only a special feature to make caching more efficient when using transformers for tokenization.\r\nWe’ll update this."
] |
https://api.github.com/repos/huggingface/datasets/issues/1615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1615/comments | https://api.github.com/repos/huggingface/datasets/issues/1615/events | https://github.com/huggingface/datasets/issues/1615 | 771,641,088 | MDU6SXNzdWU3NzE2NDEwODg= | 1,615 | Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | [] | open | false | null | 10 | 2020-12-20T17:27:38Z | 2021-06-25T13:11:33Z | null | null | Hello,
I'm having an issue downloading the TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", cache_dir = "./datasets")
```
## The output:
1. Download begins:
```
Downloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /cs/labs/gabis/sapirweissbuch/tr
ivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d...
Downloading: 17%|███████████████████▉ | 446M/2.67G [00:37<04:45, 7.77MB/s]
```
2. 100% is reached
3. It got stuck here for about an hour, and added an additional 30 GB of data to the "./datasets" directory. I killed the process eventually.
A similar issue can be observed in Google Colab:
https://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing
## Expected behaviour:
The dataset "TriviaQA" should be successfully downloaded.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1615/timeline | null | null | null | null | false | [
"Hi @SapirWeissbuch,\r\nWhen you are saying it freezes, at that time it is unzipping the file from the zip file it downloaded. Since it's a very heavy file it'll take some time. It was taking ~11GB after unzipping when it started reading examples for me. Hope that helps!\r\n\r\n",
"Hi @bhavitvyamalik \r\nThanks for the reply!\r\nActually I let it run for 30 minutes before I killed the process. In this time, 30GB were extracted (much more than 11GB), I checked the size of the destination directory.\r\n\r\nWhat version of Datasets are you using?\r\n",
"I'm using datasets version: 1.1.3. I think you should drop `cache_dir` and use only\r\n`dataset = datasets.load_dataset(\"trivia_qa\", \"rc\")`\r\n\r\nTried that on colab and it's working there too\r\n\r\n",
"Train, Validation, and Test splits contain 138384, 18669, and 17210 samples respectively. It takes some time to read the samples. Even in your colab notebook it was reading the samples before you killed the process. Let me know if it works now!",
"Hi, it works on colab but it still doesn't work on my computer, same problem as before - overly large and long extraction process.\r\nI have to use a custom 'cache_dir' because I don't have any space left in my home directory where it is defaulted, maybe this could be the issue?",
"I tried running this again - More details of the problem:\r\nCode:\r\n```\r\ndatasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir=\"/path/to/cache\")\r\n```\r\n\r\nThe output:\r\n```\r\nDownloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to path/to/cache/trivia_qa/rc/1.1.0/e734e28133f4d9a353af322aa52b9f266f6f27cbf2f072690a1694e577546b0d... \r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.67G/2.67G [03:38<00:00, 12.2MB/s]\r\n\r\n```\r\nThe process continues (no progress bar is visible).\r\nI tried `du -sh .` in `path/to/cache`, and the size keeps increasing, reached 35G before I killed the process.\r\n\r\nGoogle Colab with custom `cache_dir` has same issue.\r\nhttps://colab.research.google.com/drive/1nn1Lw02GhfGFylzbS2j6yksGjPo7kkN-?usp=sharing#scrollTo=2G2O0AeNIXan",
"1) You can clear the huggingface folder in your `.cache` directory to use default directory for datasets. Speed of extraction and loading of samples depends a lot on your machine's configurations too.\r\n\r\n2) I tried on colab `dataset = datasets.load_dataset(\"trivia_qa\", \"rc\", cache_dir = \"./datasets\")`. After memory usage reached around 42GB (starting from 32GB used already), the dataset was loaded in the memory. Even Your colab notebook shows \r\n\r\nwhich means it's loaded now.",
"Facing the same issue.\r\nI am able to download datasets without `cache_dir`, however, when I specify the `cache_dir`, the process hangs indefinitely after partial download. \r\nTried for `data = load_dataset(\"cnn_dailymail\", \"3.0.0\")`",
"Hi @ashutoshml,\r\nI tried this and it worked for me:\r\n`data = load_dataset(\"cnn_dailymail\", \"3.0.0\", cache_dir=\"./dummy\")`\r\n\r\nI'm using datasets==1.8.0. It took around 3-4 mins for dataset to unpack and start loading examples.",
"Ok. I waited for 20-30 mins, and it still is stuck.\r\nI am using datasets==1.8.0.\r\n\r\nIs there anyway to check what is happening? like a` --verbose` flag?\r\n\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/10 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/10/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/10/comments | https://api.github.com/repos/huggingface/datasets/issues/10/events | https://github.com/huggingface/datasets/pull/10 | 603,909,327 | MDExOlB1bGxSZXF1ZXN0NDA2NjAxNzQ2 | 10 | Name json file "squad.json" instead of "squad.py.json" | [] | closed | false | null | 0 | 2020-04-21T11:04:28Z | 2022-10-04T09:31:44Z | 2020-04-21T20:48:06Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/10/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/10/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/10.diff",
"html_url": "https://github.com/huggingface/datasets/pull/10",
"merged_at": "2020-04-21T20:48:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/10.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/10"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3166/comments | https://api.github.com/repos/huggingface/datasets/issues/3166/events | https://github.com/huggingface/datasets/pull/3166 | 1,036,450,283 | PR_kwDODunzps4tsVQJ | 3,166 | Deprecate prepare_module | [] | closed | false | null | 1 | 2021-10-26T15:28:24Z | 2021-11-05T09:27:37Z | 2021-11-05T09:27:36Z | null | In version 1.13, `prepare_module` was deprecated.
This PR adds a deprecation warning and replaces its usages throughout the library with `dataset_module_factory` or `metric_module_factory`.
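As a rough sketch of the deprecation pattern (not the exact code added in this PR):
```python
import warnings

def prepare_module(*args, **kwargs):
    warnings.warn(
        "prepare_module is deprecated and will be removed in a future version. "
        "Use dataset_module_factory or metric_module_factory instead.",
        FutureWarning,
    )
    # ... delegate to dataset_module_factory / metric_module_factory as before ...
```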
Fix #3165. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3166/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3166/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3166.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3166",
"merged_at": "2021-11-05T09:27:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3166.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3166"
} | true | [
"Sounds good, thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3976/comments | https://api.github.com/repos/huggingface/datasets/issues/3976/events | https://github.com/huggingface/datasets/pull/3976 | 1,175,043,780 | PR_kwDODunzps40uOY6 | 3,976 | Fix main classes reference in docs | [] | closed | false | null | 3 | 2022-03-21T08:19:46Z | 2022-04-12T14:19:39Z | 2022-04-12T14:19:38Z | null | Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`, this PR fixes this issue by wrapping code examples in this page with markdown code block.
There are other examples in datasets library having this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3976/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3976.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3976",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3976.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3976"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.",
"Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]",
"Thanks ! I think this has been fixed already in https://github.com/huggingface/datasets/pull/3925 though\r\n\r\nI'm closing this one then if it's fine for you"
] |
https://api.github.com/repos/huggingface/datasets/issues/478 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/478/comments | https://api.github.com/repos/huggingface/datasets/issues/478/events | https://github.com/huggingface/datasets/issues/478 | 673,178,317 | MDU6SXNzdWU2NzMxNzgzMTc= | 478 | Export TFRecord to GCP bucket | [] | closed | false | null | 1 | 2020-08-05T01:08:32Z | 2020-08-05T01:21:37Z | 2020-08-05T01:21:36Z | null | Previously, I was writing TFRecords manually to GCP bucket with : `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')`
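For reference, the manual approach looked roughly like this (the bucket name is a placeholder, and GCP credentials must already be configured):
```python
import tensorflow as tf

# tf.io.TFRecordWriter goes through TensorFlow's file-system layer,
# which understands gs:// paths when GCP credentials are available.
with tf.io.TFRecordWriter("gs://my_bucket/x.tfrecord") as writer:
    example = tf.train.Example(features=tf.train.Features(feature={
        "value": tf.train.Feature(int64_list=tf.train.Int64List(value=[42])),
    }))
    writer.write(example.SerializeToString())
```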
Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be written directly to a GCP bucket.
`dataset.export('local.tfrecord')` works fine,
but `dataset.export('gs://my_bucket/x.tfrecord')` does not work.
There is no error message, I just can't find the file on my bucket...
---
Looking at the code, `nlp` is using `tf.data.experimental.TFRecordWriter`, while I was using `tf.io.TFRecordWriter`.
**What's the difference between those two? How can I write TFRecord files directly to a GCP bucket?**
@jarednielsen @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/478/timeline | null | completed | null | null | false | [
"Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentification error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the sessions makes it work :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4182/comments | https://api.github.com/repos/huggingface/datasets/issues/4182/events | https://github.com/huggingface/datasets/issues/4182 | 1,208,285,235 | I_kwDODunzps5IBPgz | 4,182 | Zenodo.org download is not responding | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-04-19T12:26:57Z | 2022-04-20T07:11:05Z | 2022-04-20T07:11:05Z | null | ## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data, and they cannot be downloaded either.
It would be better to use a more reliable way to store the original data, such as an S3 bucket.
## Steps to reproduce the bug
```python
load_dataset("sick")
```
## Expected results
Dataset should be downloaded.
## Actual results
ConnectionError: Couldn't reach https://zenodo.org/record/2787612/files/SICK.zip?download=1 (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='zenodo.org', port=443): Read timed out. (read timeout=100)")))
## Environment info
- `datasets` version: 2.1.0
- Platform: Darwin-21.4.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4182/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4182/timeline | null | completed | null | null | false | [
"[Off topic but related: Is the uptime of S3 provably better than Zenodo's?]",
"Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.\r\n\r\nIt was the dataset owners decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html\r\n\r\nAnd yes, you are right: Zenodo is currently having some incidents and people are reporting problems from it.\r\n\r\nOn the other hand, we could contact the data owners and propose them to host their data at our Hugging Face Hub.\r\n\r\n@julien-c I guess so.\r\n",
"Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kind of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in Hugging Face Hub would be a great solution. ",
"Definitely, @dkajtoch! But we have to ask permission to the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub.",
"Ahhh good point! License is the problem :("
] |
https://api.github.com/repos/huggingface/datasets/issues/5577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5577/comments | https://api.github.com/repos/huggingface/datasets/issues/5577/events | https://github.com/huggingface/datasets/issues/5577 | 1,598,587,665 | I_kwDODunzps5fSIMR | 5,577 | Cannot load `the_pile_openwebtext2` | [] | closed | false | null | 1 | 2023-02-24T13:01:48Z | 2023-02-24T14:01:09Z | 2023-02-24T14:01:09Z | null | ### Describe the bug
I hit the same bug mentioned in #3053, which was never fixed: several `reddit_scores` are larger than the `int8` (or even `int16`) range. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62
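For reference, widening the integer type in the feature definition would avoid the overflow; a sketch of the relevant part (assuming `reddit_scores` is a sequence feature, as in the loading script):
```python
from datasets import Features, Sequence, Value

# reddit_scores can exceed the int8/int16 range, so a wider integer type is needed:
features = Features({"reddit_scores": Sequence(Value("int32"))})
```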
### Steps to reproduce the bug
```python3
from datasets import load_dataset
dataset = load_dataset("the_pile_openwebtext2")
```
### Expected behavior
load as normal.
### Environment info
- `datasets` version: 2.10.0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5577/timeline | null | completed | null | null | false | [
"Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3386/comments | https://api.github.com/repos/huggingface/datasets/issues/3386/events | https://github.com/huggingface/datasets/pull/3386 | 1,071,813,141 | PR_kwDODunzps4va7-2 | 3,386 | Fix typos in dataset cards | [] | closed | false | null | 0 | 2021-12-06T07:20:40Z | 2021-12-06T09:30:55Z | 2021-12-06T09:30:54Z | null | This PR:
- Fix typos in dataset cards
- Fix Papers With Code ID for:
- Bilingual Corpus of Arabic-English Parallel Tweets
- Tweets Hate Speech Detection
- Add pretty name tags | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3386/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3386.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3386",
"merged_at": "2021-12-06T09:30:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3386.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3386"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2537/comments | https://api.github.com/repos/huggingface/datasets/issues/2537/events | https://github.com/huggingface/datasets/pull/2537 | 927,472,659 | MDExOlB1bGxSZXF1ZXN0Njc1NjI1OTY3 | 2,537 | Add Parquet loader + from_parquet and to_parquet | [] | closed | false | null | 3 | 2021-06-22T17:28:23Z | 2021-06-30T16:31:03Z | 2021-06-30T16:30:58Z | null | Continuation of #2247
I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`.
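A quick usage sketch of the two new methods (the file name is just an example):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})
ds.to_parquet("data.parquet")                    # write the dataset to a Parquet file
reloaded = Dataset.from_parquet("data.parquet")  # load it back as a Dataset
```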
As usual, the data are converted to arrow in a batched way to avoid loading everything in memory. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2537/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2537",
"merged_at": "2021-06-30T16:30:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2537"
} | true | [
"`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.\r\n\r\nAlso I still need to add dummy data to test the parquet builder.",
"I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.\r\n\r\nEverything is ready for review now :)\r\nI reused pretty much the same tests we had for CSV",
"Done !\r\nNow we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to pyarrow>=3.0.0"
] |
https://api.github.com/repos/huggingface/datasets/issues/4239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4239/comments | https://api.github.com/repos/huggingface/datasets/issues/4239/events | https://github.com/huggingface/datasets/pull/4239 | 1,217,269,689 | PR_kwDODunzps423tZr | 4,239 | Small fixes in ROC AUC docs | [] | closed | false | null | 1 | 2022-04-27T12:15:50Z | 2022-05-02T13:28:57Z | 2022-05-02T13:22:03Z | null | The list of use cases did not render on GitHub with the prepended spacing.
Additionally, some typos were fixed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4239/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4239/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4239.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4239",
"merged_at": "2022-05-02T13:22:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4239.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4239"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1107/comments | https://api.github.com/repos/huggingface/datasets/issues/1107/events | https://github.com/huggingface/datasets/pull/1107 | 757,031,179 | MDExOlB1bGxSZXF1ZXN0NTMyNDc0MzMy | 1,107 | Add arsentd_lev dataset | [] | closed | false | null | 1 | 2020-12-04T11:31:04Z | 2020-12-05T15:38:09Z | 2020-12-05T15:38:09Z | null | Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV)
Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830)
Homepage: http://oma-project.com/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1107/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1107",
"merged_at": "2020-12-05T15:38:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1107"
} | true | [
"thanks ! can you also regenerate the dataset_infos.json file please ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3723/comments | https://api.github.com/repos/huggingface/datasets/issues/3723/events | https://github.com/huggingface/datasets/pull/3723 | 1,138,789,493 | PR_kwDODunzps4y3RuI | 3,723 | Fix flatten of complex feature types | [] | closed | false | null | 2 | 2022-02-15T14:45:33Z | 2022-03-18T17:32:26Z | 2022-03-18T17:28:14Z | null | Fix `flatten` for the following feature types: Image/Audio, Translation, and TranslationVariableLanguages.
Inspired by `cast`/`table_cast`, I've introduced a `table_flatten` function to handle the Image/Audio types.
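For context, a minimal example of what `flatten` does with a plain nested (struct) column (illustrative data):
```python
from datasets import Dataset

ds = Dataset.from_dict({"answers": [{"text": "Paris", "start": 12}]})
flat = ds.flatten()
print(flat.column_names)  # ['answers.text', 'answers.start']
```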
CC: @SBrandeis
Fix #3686.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3723/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3723.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3723",
"merged_at": "2022-03-18T17:28:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3723.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3723"
} | true | [
"Apparently the merge brought back some tests that use `flatten_()` that we removed recently",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3083/comments | https://api.github.com/repos/huggingface/datasets/issues/3083/events | https://github.com/huggingface/datasets/issues/3083 | 1,026,397,062 | I_kwDODunzps49LZOG | 3,083 | Datasets with Audio feature raise error when loaded from cache due to _resampler parameter | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-14T13:23:53Z | 2021-10-14T15:13:40Z | 2021-10-14T15:13:40Z | null | ## Describe the bug
As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load first time works
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
# load from cache breaks
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean")
```
## Actual results
```
TypeError: __init__() got an unexpected keyword argument '_resampler'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3083/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6063/comments | https://api.github.com/repos/huggingface/datasets/issues/6063/events | https://github.com/huggingface/datasets/pull/6063 | 1,818,679,485 | PR_kwDODunzps5WPtxi | 6,063 | Release: 2.14.0 | [] | closed | false | null | 4 | 2023-07-24T15:41:19Z | 2023-07-24T16:05:16Z | 2023-07-24T15:47:51Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6063/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6063",
"merged_at": "2023-07-24T15:47:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6063"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007703 / 0.011353 (-0.003650) | 0.004699 / 0.011008 (-0.006309) | 0.090195 / 0.038508 (0.051687) | 0.119165 / 0.023109 (0.096056) | 0.361435 / 0.275898 (0.085537) | 0.404429 / 0.323480 (0.080949) | 0.006172 / 0.007986 (-0.001814) | 0.003932 / 0.004328 (-0.000397) | 0.068384 / 0.004250 (0.064133) | 0.066730 / 0.037052 (0.029678) | 0.360978 / 0.258489 (0.102489) | 0.401301 / 0.293841 (0.107460) | 0.032836 / 0.128546 (-0.095710) | 0.010821 / 0.075646 (-0.064825) | 0.294526 / 0.419271 (-0.124745) | 0.068751 / 0.043533 (0.025218) | 0.368427 / 0.255139 (0.113288) | 0.376969 / 0.283200 (0.093770) | 0.040538 / 0.141683 (-0.101145) | 1.509966 / 1.452155 (0.057811) | 1.564885 / 1.492716 (0.072169) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.292243 / 0.018006 (0.274237) | 0.662067 / 0.000490 (0.661577) | 0.004966 / 0.000200 (0.004766) | 0.000103 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029050 / 0.037411 (-0.008361) | 0.099880 / 0.014526 (0.085354) | 0.109277 / 0.176557 (-0.067280) | 0.167877 / 0.737135 (-0.569258) | 0.110770 / 0.296338 (-0.185569) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395742 / 0.215209 (0.180533) | 3.944152 / 2.077655 (1.866498) | 1.875295 / 1.504120 (0.371175) | 1.705088 / 1.541195 (0.163893) | 1.884443 / 1.468490 
(0.415953) | 0.497243 / 4.584777 (-4.087534) | 3.749287 / 3.745712 (0.003575) | 4.418826 / 5.269862 (-0.851035) | 2.481149 / 4.565676 (-2.084528) | 0.058260 / 0.424275 (-0.366015) | 0.007744 / 0.007607 (0.000137) | 0.472531 / 0.226044 (0.246486) | 4.716022 / 2.268929 (2.447094) | 2.480446 / 55.444624 (-52.964179) | 2.163098 / 6.876477 (-4.713379) | 2.217555 / 2.142072 (0.075482) | 0.601965 / 4.805227 (-4.203262) | 0.139364 / 6.500664 (-6.361301) | 0.067097 / 0.075469 (-0.008372) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.330537 / 1.841788 (-0.511251) | 22.176270 / 8.074308 (14.101962) | 16.224981 / 10.191392 (6.033589) | 0.173708 / 0.680424 (-0.506715) | 0.019402 / 0.534201 (-0.514799) | 0.401994 / 0.579283 (-0.177289) | 0.432597 / 0.434364 (-0.001767) | 0.489933 / 0.540337 (-0.050404) | 0.672334 / 1.386936 (-0.714602) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008622 / 0.011353 (-0.002731) | 0.004609 / 0.011008 (-0.006399) | 0.067791 / 0.038508 (0.029283) | 0.112770 / 0.023109 (0.089661) | 0.380939 / 0.275898 (0.105041) | 0.416940 / 0.323480 (0.093460) | 0.006170 / 0.007986 (-0.001815) | 0.003876 / 0.004328 (-0.000452) | 0.066227 / 0.004250 (0.061976) | 0.073132 / 0.037052 (0.036080) | 0.390120 / 0.258489 (0.131631) | 0.420893 / 0.293841 (0.127052) | 0.033235 / 0.128546 (-0.095311) | 0.009659 / 0.075646 (-0.065987) | 0.072668 / 0.419271 (-0.346604) | 0.051333 / 0.043533 (0.007801) | 0.393828 / 0.255139 (0.138689) | 0.412376 / 0.283200 (0.129176) | 0.027760 / 0.141683 (-0.113923) | 1.494369 / 1.452155 (0.042214) | 1.592862 / 1.492716 (0.100145) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.345376 / 0.018006 (0.327369) | 0.609399 / 0.000490 (0.608909) | 0.000546 / 0.000200 (0.000346) | 0.000061 / 0.000054 (0.000007) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035601 / 0.037411 (-0.001810) | 0.106527 / 0.014526 (0.092001) | 0.114388 / 0.176557 (-0.062168) | 0.175607 / 0.737135 (-0.561529) | 0.113009 / 0.296338 (-0.183329) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417237 / 0.215209 (0.202028) | 4.136329 / 2.077655 (2.058675) | 2.147134 / 1.504120 (0.643014) | 2.009501 / 1.541195 (0.468306) | 2.139499 / 1.468490 (0.671009) | 0.491593 / 4.584777 (-4.093184) | 3.766734 / 3.745712 (0.021022) | 5.652446 / 5.269862 (0.382585) | 3.021654 / 4.565676 (-1.544022) | 0.058458 / 0.424275 (-0.365817) | 0.008271 / 0.007607 (0.000664) | 0.488229 / 0.226044 (0.262184) | 4.861343 / 2.268929 (2.592415) | 2.694142 / 55.444624 (-52.750482) | 2.489130 / 6.876477 (-4.387346) | 2.679376 / 2.142072 (0.537304) | 0.589959 / 4.805227 (-4.215268) | 0.137939 / 6.500664 (-6.362725) | 0.066833 / 0.075469 (-0.008636) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.444871 / 1.841788 (-0.396916) | 22.874961 / 8.074308 (14.800653) | 15.842130 / 10.191392 (5.650738) | 0.175529 / 0.680424 (-0.504895) | 0.019024 / 0.534201 (-0.515177) | 0.406551 / 0.579283 (-0.172732) | 0.430335 / 0.434364 (-0.004029) | 0.475750 / 0.540337 (-0.064587) | 0.624836 / 1.386936 (-0.762100) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006068 / 0.011353 (-0.005285) | 0.003694 / 0.011008 (-0.007315) | 0.080321 / 0.038508 (0.041813) | 0.061738 / 0.023109 (0.038629) | 0.329675 / 0.275898 (0.053777) | 0.364008 / 0.323480 (0.040528) | 0.004722 / 0.007986 (-0.003263) | 0.002857 / 0.004328 (-0.001471) | 0.062447 / 0.004250 (0.058197) | 0.047006 / 0.037052 (0.009953) | 0.335730 / 0.258489 (0.077241) | 0.373047 / 0.293841 (0.079206) | 0.027273 / 0.128546 (-0.101274) | 0.007979 / 0.075646 (-0.067667) | 0.262693 / 0.419271 (-0.156579) | 0.045416 / 0.043533 (0.001883) | 0.340774 / 0.255139 (0.085635) | 0.359667 / 0.283200 (0.076468) | 0.020848 / 0.141683 (-0.120835) | 1.450110 / 1.452155 (-0.002045) | 1.489511 / 1.492716 (-0.003206) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.185090 / 0.018006 (0.167084) | 0.429823 / 0.000490 (0.429334) | 0.000703 / 0.000200 (0.000503) | 0.000058 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024398 / 0.037411 (-0.013013) | 0.072983 / 0.014526 (0.058457) | 0.084012 / 0.176557 (-0.092544) | 0.146160 / 0.737135 (-0.590975) | 0.084068 / 0.296338 (-0.212270) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.432204 / 0.215209 (0.216995) | 4.320593 / 2.077655 (2.242939) | 2.261260 / 1.504120 (0.757140) | 2.087148 / 1.541195 (0.545954) | 2.144520 / 1.468490 
(0.676029) | 0.501477 / 4.584777 (-4.083300) | 3.119557 / 3.745712 (-0.626156) | 3.572527 / 5.269862 (-1.697335) | 2.208836 / 4.565676 (-2.356840) | 0.057232 / 0.424275 (-0.367043) | 0.006494 / 0.007607 (-0.001113) | 0.508135 / 0.226044 (0.282091) | 5.090416 / 2.268929 (2.821488) | 2.739800 / 55.444624 (-52.704824) | 2.416105 / 6.876477 (-4.460372) | 2.616037 / 2.142072 (0.473965) | 0.583730 / 4.805227 (-4.221497) | 0.124312 / 6.500664 (-6.376352) | 0.060760 / 0.075469 (-0.014709) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256097 / 1.841788 (-0.585691) | 18.326073 / 8.074308 (10.251765) | 13.859173 / 10.191392 (3.667781) | 0.143639 / 0.680424 (-0.536785) | 0.016649 / 0.534201 (-0.517552) | 0.331671 / 0.579283 (-0.247612) | 0.365370 / 0.434364 (-0.068994) | 0.392753 / 0.540337 (-0.147584) | 0.549302 / 1.386936 (-0.837634) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003641 / 0.011008 (-0.007367) | 0.063109 / 0.038508 (0.024601) | 0.060482 / 0.023109 (0.037372) | 0.404047 / 0.275898 (0.128149) | 0.425436 / 0.323480 (0.101956) | 0.004603 / 0.007986 (-0.003382) | 0.002905 / 0.004328 (-0.001423) | 0.063207 / 0.004250 (0.058956) | 0.048248 / 0.037052 (0.011196) | 0.404325 / 0.258489 (0.145836) | 0.432652 / 0.293841 (0.138811) | 0.027630 / 0.128546 (-0.100916) | 0.008062 / 0.075646 (-0.067584) | 0.068367 / 0.419271 (-0.350905) | 0.042169 / 0.043533 (-0.001364) | 0.384903 / 0.255139 (0.129764) | 0.418617 / 0.283200 (0.135417) | 0.020767 / 0.141683 (-0.120915) | 1.463606 / 1.452155 (0.011451) | 1.512081 / 1.492716 (0.019365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.229601 / 0.018006 (0.211594) | 0.417878 / 0.000490 (0.417388) | 0.000373 / 0.000200 (0.000173) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026354 / 0.037411 (-0.011057) | 0.078100 / 0.014526 (0.063574) | 0.087122 / 0.176557 (-0.089434) | 0.140017 / 0.737135 (-0.597118) | 0.089923 / 0.296338 (-0.206415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422405 / 0.215209 (0.207196) | 4.237383 / 2.077655 (2.159728) | 2.161104 / 1.504120 (0.656984) | 1.982337 / 1.541195 (0.441142) | 2.050216 / 1.468490 (0.581726) | 0.499281 / 4.584777 (-4.085496) | 2.996953 / 3.745712 (-0.748759) | 5.027069 / 5.269862 (-0.242792) | 2.804703 / 4.565676 (-1.760974) | 0.057707 / 0.424275 (-0.366568) | 0.006809 / 0.007607 (-0.000798) | 0.495196 / 0.226044 (0.269152) | 4.946593 / 2.268929 (2.677665) | 2.598965 / 55.444624 (-52.845660) | 2.349871 / 6.876477 (-4.526606) | 2.451665 / 2.142072 (0.309593) | 0.592314 / 4.805227 (-4.212913) | 0.125685 / 6.500664 (-6.374979) | 0.063252 / 0.075469 (-0.012217) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.325422 / 1.841788 (-0.516366) | 18.521059 / 8.074308 (10.446751) | 14.046757 / 10.191392 (3.855365) | 0.133009 / 0.680424 (-0.547415) | 0.017097 / 0.534201 (-0.517104) | 0.339804 / 0.579283 (-0.239479) | 0.345464 / 0.434364 (-0.088900) | 0.387623 / 0.540337 (-0.152714) | 0.519880 / 1.386936 (-0.867056) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008671 / 0.011353 (-0.002682) | 0.004681 / 0.011008 (-0.006327) | 0.107517 / 0.038508 (0.069008) | 0.078846 / 0.023109 (0.055737) | 0.449745 / 0.275898 (0.173847) | 0.504075 / 0.323480 (0.180596) | 0.005837 / 0.007986 (-0.002148) | 0.004031 / 0.004328 (-0.000297) | 0.092021 / 0.004250 (0.087771) | 0.065954 / 0.037052 (0.028902) | 0.442082 / 0.258489 (0.183593) | 0.529349 / 0.293841 (0.235508) | 0.052527 / 0.128546 (-0.076019) | 0.013854 / 0.075646 (-0.061792) | 0.367315 / 0.419271 (-0.051956) | 0.068731 / 0.043533 (0.025199) | 0.494733 / 0.255139 (0.239594) | 0.472801 / 0.283200 (0.189601) | 0.036791 / 0.141683 (-0.104892) | 1.877648 / 1.452155 (0.425493) | 1.928399 / 1.492716 (0.435683) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231910 / 0.018006 (0.213904) | 0.553464 / 0.000490 (0.552974) | 0.011915 / 0.000200 (0.011715) | 0.000378 / 0.000054 (0.000324) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028232 / 0.037411 (-0.009179) | 0.091441 / 0.014526 (0.076916) | 0.110394 / 0.176557 (-0.066162) | 0.187638 / 0.737135 (-0.549497) | 0.111810 / 0.296338 (-0.184529) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.599987 / 0.215209 (0.384778) | 6.008709 / 2.077655 (3.931054) | 2.518769 / 1.504120 (1.014650) | 2.197029 / 1.541195 (0.655834) | 2.217165 / 1.468490 
(0.748675) | 0.894939 / 4.584777 (-3.689837) | 5.001217 / 3.745712 (1.255505) | 4.636482 / 5.269862 (-0.633379) | 3.237613 / 4.565676 (-1.328063) | 0.104227 / 0.424275 (-0.320048) | 0.008504 / 0.007607 (0.000897) | 0.750190 / 0.226044 (0.524145) | 7.514571 / 2.268929 (5.245642) | 3.358003 / 55.444624 (-52.086621) | 2.585649 / 6.876477 (-4.290827) | 2.731129 / 2.142072 (0.589056) | 1.088828 / 4.805227 (-3.716400) | 0.217308 / 6.500664 (-6.283356) | 0.076410 / 0.075469 (0.000941) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.620087 / 1.841788 (-0.221701) | 23.145743 / 8.074308 (15.071435) | 20.583403 / 10.191392 (10.392011) | 0.225467 / 0.680424 (-0.454956) | 0.029063 / 0.534201 (-0.505138) | 0.480563 / 0.579283 (-0.098720) | 0.539083 / 0.434364 (0.104719) | 0.563787 / 0.540337 (0.023449) | 0.782902 / 1.386936 (-0.604034) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010113 / 0.011353 (-0.001239) | 0.004997 / 0.011008 (-0.006011) | 0.082974 / 0.038508 (0.044466) | 0.090375 / 0.023109 (0.067266) | 0.440273 / 0.275898 (0.164375) | 0.476939 / 0.323480 (0.153459) | 0.005955 / 0.007986 (-0.002031) | 0.004375 / 0.004328 (0.000046) | 0.080459 / 0.004250 (0.076209) | 0.061787 / 0.037052 (0.024734) | 0.477211 / 0.258489 (0.218722) | 0.487164 / 0.293841 (0.193323) | 0.054198 / 0.128546 (-0.074348) | 0.013945 / 0.075646 (-0.061701) | 0.093006 / 0.419271 (-0.326266) | 0.062685 / 0.043533 (0.019152) | 0.461373 / 0.255139 (0.206234) | 0.475766 / 0.283200 (0.192567) | 0.032059 / 0.141683 (-0.109623) | 1.857989 / 1.452155 (0.405834) | 1.837993 / 1.492716 (0.345277) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.243048 / 0.018006 (0.225042) | 0.535850 / 0.000490 (0.535360) | 0.007204 / 0.000200 (0.007004) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032584 / 0.037411 (-0.004827) | 0.098151 / 0.014526 (0.083625) | 0.109691 / 0.176557 (-0.066866) | 0.172803 / 0.737135 (-0.564333) | 0.110469 / 0.296338 (-0.185869) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.635086 / 0.215209 (0.419877) | 6.500864 / 2.077655 (4.423210) | 2.996727 / 1.504120 (1.492607) | 2.537218 / 1.541195 (0.996023) | 2.572310 / 1.468490 (1.103820) | 0.870868 / 4.584777 (-3.713909) | 4.989744 / 3.745712 (1.244032) | 4.422174 / 5.269862 (-0.847687) | 2.935874 / 4.565676 (-1.629803) | 0.097118 / 0.424275 (-0.327157) | 0.009360 / 0.007607 (0.001753) | 0.790447 / 0.226044 (0.564403) | 7.859519 / 2.268929 (5.590591) | 3.975616 / 55.444624 (-51.469009) | 3.018271 / 6.876477 (-3.858206) | 3.111173 / 2.142072 (0.969101) | 1.085577 / 4.805227 (-3.719651) | 0.225719 / 6.500664 (-6.274945) | 0.080576 / 0.075469 (0.005107) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.802284 / 1.841788 (-0.039504) | 23.487921 / 8.074308 (15.413613) | 20.595171 / 10.191392 (10.403779) | 0.196610 / 0.680424 (-0.483814) | 0.027483 / 0.534201 (-0.506718) | 0.485840 / 0.579283 (-0.093443) | 0.542661 / 0.434364 (0.108297) | 0.580602 / 0.540337 (0.040265) | 0.768195 / 1.386936 (-0.618741) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/937/comments | https://api.github.com/repos/huggingface/datasets/issues/937/events | https://github.com/huggingface/datasets/issues/937 | 753,921,078 | MDU6SXNzdWU3NTM5MjEwNzg= | 937 | Local machine/cluster Beam Datasets example/tutorial | [] | open | false | null | 1 | 2020-12-01T01:11:43Z | 2020-12-23T13:54:56Z | null | null | Hi,
I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, but there were too many runtime errors to fix along the way, and even then I wasn't able to get either runner to produce the desired output correctly.
Thanks!
Shang | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/937/timeline | null | null | null | null | false | [
"I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to make your processing work ?"
] |
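The issue and reply above discuss running Apache Beam-based datasets locally rather than on GCP/Dataflow. As a minimal sketch only (the dataset name and config below are assumptions chosen for illustration, not taken from the thread), loading a Beam dataset with the local DirectRunner looks roughly like this:

```python
from datasets import load_dataset

# Run the Beam processing pipeline locally with Apache Beam's DirectRunner.
# "wikipedia" / "20220301.simple" are placeholder choices kept deliberately small;
# the DirectRunner keeps intermediate data in memory, so it is not memory efficient.
ds = load_dataset(
    "wikipedia",
    "20220301.simple",
    beam_runner="DirectRunner",
)
print(ds)
```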
https://api.github.com/repos/huggingface/datasets/issues/332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/332/comments | https://api.github.com/repos/huggingface/datasets/issues/332/events | https://github.com/huggingface/datasets/pull/332 | 649,140,135 | MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz | 332 | Add wiki_dpr | [] | closed | false | null | 2 | 2020-07-01T17:12:00Z | 2020-07-06T12:21:17Z | 2020-07-06T12:21:16Z | null | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists in 21M passages from the english wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73GB vs 14GB)
- I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing)
- I added the case for lists of urls as input of the download_manager | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/332/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/332",
"merged_at": "2020-07-06T12:21:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/332"
} | true | [
"The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.",
"It's ok to merge now imo. I'll make another PR if we find a way to have the missing embeddings"
] |
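The wiki_dpr pull request above mentions two configurations, with and without the 768-dim embeddings, stored as a non-fixed-size sequence of floats. A hedged usage sketch follows; the configuration string is an assumption, so check the dataset card for the exact names:

```python
from datasets import load_dataset

# Hypothetical config name; check the dataset card for the exact strings.
# Note: the configuration with embeddings is ~73GB, the one without ~14GB.
wiki = load_dataset("wiki_dpr", "psgs_w100.nq.exact", split="train")

row = wiki[0]
print(row["title"], row["text"][:80])   # passage title and text
print(len(row["embeddings"]))           # expected to be 768 (DPR context-encoder dimension)
```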
https://api.github.com/repos/huggingface/datasets/issues/5000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5000/comments | https://api.github.com/repos/huggingface/datasets/issues/5000/events | https://github.com/huggingface/datasets/issues/5000 | 1,379,709,398 | I_kwDODunzps5SPLHW | 5,000 | Dataset Viewer issue for asapp/slue | [] | closed | false | null | 9 | 2022-09-20T16:45:45Z | 2022-09-27T07:04:03Z | 2022-09-21T07:24:07Z | null | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5000/timeline | null | completed | null | null | false | [
"<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=asapp/slue\r\n\r\n```json\r\n{\"error\":\"Not found.\"}\r\n```",
"I just launched a refresh. It's weird, I don't see any entry for this dataset in the cache, it's a bug on our side. In order to try to understand what happened, did you change the visibility status from private to public, by any chance?",
"The dataset is being refreshed, please retry later.\r\n\r\n<img width=\"802\" alt=\"Capture d’écran 2022-09-20 à 22 39 46\" src=\"https://user-images.githubusercontent.com/1676121/191360072-7cc86486-4e84-4b47-8f9a-4a69fe84a5ac.png\">\r\n",
"OK. We now have an issue because the dataset cannot be streamed, and the dataset viewer relies on it.\r\n\r\nMaybe @huggingface/datasets can help:\r\n\r\n```\r\nError code: StreamingRowsError\r\nException: NotImplementedError\r\nMessage: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\nTraceback: Traceback (most recent call last):\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 337, in get_first_rows_response\r\n rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token)\r\n File \"/src/services/worker/src/worker/utils.py\", line 123, in decorator\r\n return func(*args, **kwargs)\r\n File \"/src/services/worker/src/worker/responses/first_rows.py\", line 65, in get_rows\r\n ds = load_dataset(\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1739, in load_dataset\r\n return builder_instance.as_streaming_dataset(split=split)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py\", line 1025, in as_streaming_dataset\r\n splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n File \"/tmp/modules-cache/datasets_modules/datasets/asapp--slue/adaa0c78233e1a1df9c2f054e690ec5fc3eaf453bd76b80fe5cbe5728e55d9b1/slue.py\", line 189, in _split_generators\r\n dl_dir = dl_manager.download_and_extract(_DL_URLS[config_name])\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 944, in download_and_extract\r\n return self.extract(self.download(url_or_urls))\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 907, in extract\r\n urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/py_utils.py\", line 385, in map_nested\r\n return function(data_struct)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 912, in _extract\r\n protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token)\r\n File \"/src/services/worker/.venv/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py\", line 390, in _get_extraction_protocol\r\n raise NotImplementedError(\r\n NotImplementedError: Extraction protocol for TAR archives like 'https://public-dataset-model-store.awsdev.asapp.com/users/sshon/public/slue/slue-voxpopuli_v0.2_blind.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead.\r\n```",
"Thanks @severo, \r\n\r\nDo I have to modify the python script to support streaming so that it can be previewed?\r\nIs there a document somewhere that I can follow?\r\n",
"Hi @fwu-asapp thanks for reporting, and thanks @severo for the investigation.\r\n\r\nAs explained by @severo, the preview requires that your dataset loading script supports streaming.\r\n\r\nThere are several options here:\r\n- the easiest would be to replace the source files, archived using ZIP instead TAR: the TAR format does not allow random access while streaming, but only sequential access; the ZIP files support streaming out of the box.\r\n- alternatively, to stream TAR archives you can use `dl_manager.iter_archive`: the only prerequisite is that your \"index\" files (.tsv) should have been archived before their corresponding audio files, so while iterating the content of the TAR archive, the metadata files appear first. I think this is the case for voxpopuli tar but not for voxceleb.\r\n- if your .tsv files were not archived before their corresponding audio files (I think this is the case for voxceleb), then you should extract the .tsv files and host them separately (you can host them on the same Hugging Face Hub).\r\n - you can take as example, e.g.: https://huggingface.co/datasets/vivos/blob/main/vivos.py\r\n\r\nAs an advanced approach, you can handle both streaming and non-streaming cases separately.\r\n- as for example: https://huggingface.co/datasets/librispeech_asr/blob/main/librispeech_asr.py or https://huggingface.co/datasets/google/fleurs/blob/main/fleurs.py\r\n\r\nSee related discussion:\r\n- https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492",
"Thanks @albertvillanova for your clarification. I'll talk to my collaborators to see if we can replace those files. Let me just close this issue for now.",
"FYI, after replacing the source files with the ZIP ones, the dataset viewer works well. Thanks again to @severo and @albertvillanova for your help!",
"Great! And thank you for sharing that interesting dataset!"
] |
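One of the maintainer replies above points to `dl_manager.iter_archive` for streaming TAR archives inside a loading script. Below is a hypothetical fragment of that pattern; the file layout, column names and pairing logic are assumptions, not taken from the SLUE script:

```python
import csv

def generate_examples(archive_iter, metadata_path):
    """Hypothetical sketch of the iter_archive pattern.

    `archive_iter` is what `dl_manager.iter_archive(tar_path)` returns: an iterator
    of (path_inside_archive, file_object) pairs read sequentially, which is why it
    works in streaming mode where random access into a TAR is impossible.
    """
    with open(metadata_path, encoding="utf-8") as f:
        metadata = {row["path"]: row for row in csv.DictReader(f, delimiter="\t")}

    key = 0
    for path, fobj in archive_iter:
        if path in metadata:  # only yield audio files listed in the .tsv index
            yield key, {
                "audio": {"path": path, "bytes": fobj.read()},
                "text": metadata[path].get("text", ""),
            }
            key += 1
```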
https://api.github.com/repos/huggingface/datasets/issues/5216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5216/comments | https://api.github.com/repos/huggingface/datasets/issues/5216/events | https://github.com/huggingface/datasets/issues/5216 | 1,441,041,947 | I_kwDODunzps5V5I4b | 5,216 | save_elasticsearch_index | [] | open | false | null | 1 | 2022-11-08T23:06:52Z | 2022-11-09T13:16:45Z | null | null | Hi,
I am new to Dataset and Elasticsearch. I was wondering whether there is an approach equivalent to save_faiss_index for saving an Elasticsearch index locally for later use, to remove the need to re-index a dataset? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5216/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5216/timeline | null | null | null | null | false | [
"Hi ! I think there exist tools to dump and reload an index in your elastic search but I'm not super familiar with it.\r\n\r\nAnyway after reloading an index in elastic search you can call `ds.load_elasticsearch_index` which will connect the index to the dataset without re-indexing"
] |
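The reply above mentions `ds.load_elasticsearch_index` as a way to reconnect a dataset to an index that already lives in Elasticsearch, without re-indexing. A small sketch of that flow, with placeholder host, port, index name and example dataset:

```python
from datasets import load_dataset

ds = load_dataset("crime_and_punish", split="train[:100]")

# First run: build the index once. Elasticsearch keeps it server-side, so unlike
# FAISS there is no client-side "save index" step.
ds.add_elasticsearch_index("line", host="localhost", port=9200, es_index_name="demo_index")

# Later runs: reconnect to the existing index instead of re-indexing.
ds.load_elasticsearch_index("line", host="localhost", port=9200, es_index_name="demo_index")
scores, retrieved = ds.get_nearest_examples("line", "an exceptional case", k=5)
print(retrieved["line"])
```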
https://api.github.com/repos/huggingface/datasets/issues/3277 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3277/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3277/comments | https://api.github.com/repos/huggingface/datasets/issues/3277/events | https://github.com/huggingface/datasets/pull/3277 | 1,054,122,656 | PR_kwDODunzps4ujk11 | 3,277 | f-string formatting | [] | closed | false | null | 1 | 2021-11-15T21:37:05Z | 2021-11-19T20:40:08Z | 2021-11-17T16:18:38Z | null | **Fix #3257**
Replaced _.format()_ and _%_ with f-strings in the following modules:
- [x] **tests**
- [x] **metrics**
- [x] **benchmarks**
- [x] **utils**
- [x] **templates**
- [x] **src/Datasets/\*.py**
Modules in **_src/Datasets/_**:
- [x] **commands**
- [x] **features**
- [x] **formatting**
- [x] **io**
- [x] **tasks**
- [x] **utils**
Module **datasets** will not be edited as asked by @mariosasko
A correction of the first PR (#3267).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3277/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3277/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3277.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3277",
"merged_at": "2021-11-17T16:18:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3277.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3277"
} | true | [
"Hello @lhoestq, ```make style``` is applied as asked. :)"
] |
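The pull request above converts `.format()` and `%` interpolation to f-strings. A tiny before/after illustration of the kind of change involved, with made-up variable names not taken from the diff:

```python
dataset_name, split = "squad", "train"

# Before: percent-style and str.format() interpolation.
old_percent = "Loading %s (split: %s)" % (dataset_name, split)
old_format = "Loading {} (split: {})".format(dataset_name, split)

# After: the equivalent f-string, with the variables interpolated inline.
new = f"Loading {dataset_name} (split: {split})"

assert old_percent == old_format == new
```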
https://api.github.com/repos/huggingface/datasets/issues/4947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4947/comments | https://api.github.com/repos/huggingface/datasets/issues/4947/events | https://github.com/huggingface/datasets/pull/4947 | 1,364,967,957 | PR_kwDODunzps4-hvbq | 4,947 | Try to fix the Windows CI after TF update 2.10 | [] | closed | false | null | 1 | 2022-09-07T17:14:49Z | 2022-09-08T09:13:10Z | 2022-09-08T09:13:10Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4947/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4947.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4947",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4947.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4947"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4947). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/4389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4389/comments | https://api.github.com/repos/huggingface/datasets/issues/4389/events | https://github.com/huggingface/datasets/pull/4389 | 1,244,693,690 | PR_kwDODunzps44RKMn | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | [] | closed | false | null | 1 | 2022-05-23T07:19:49Z | 2022-05-23T10:38:26Z | 2022-05-23T10:29:55Z | null | This PR fixes some URLs.
Fix #4386. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"merged_at": "2022-05-23T10:29:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6085/comments | https://api.github.com/repos/huggingface/datasets/issues/6085/events | https://github.com/huggingface/datasets/pull/6085 | 1,824,985,188 | PR_kwDODunzps5WlAyA | 6,085 | Fix `fsspec` download | [] | open | false | null | 3 | 2023-07-27T18:54:47Z | 2023-07-27T19:06:13Z | null | null | Testing `ds = load_dataset("audiofolder", data_files="s3://datasets.huggingface.co/SpeechCommands/v0.01/v0.01_test.tar.gz", storage_options={"anon": True})` and trying to fix the issues raised by `fsspec` ...
TODO: fix
```
self.session = aiobotocore.session.AioSession(**self.kwargs)
TypeError: __init__() got an unexpected keyword argument 'hf'
```
by "preparing `storage_options`" for the `fsspec` head/get | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6085/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/6085.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6085",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6085.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6085"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006031 / 0.011353 (-0.005322) | 0.003579 / 0.011008 (-0.007429) | 0.080862 / 0.038508 (0.042354) | 0.056660 / 0.023109 (0.033551) | 0.388285 / 0.275898 (0.112387) | 0.422270 / 0.323480 (0.098790) | 0.004651 / 0.007986 (-0.003335) | 0.002895 / 0.004328 (-0.001433) | 0.062767 / 0.004250 (0.058517) | 0.046491 / 0.037052 (0.009438) | 0.389918 / 0.258489 (0.131428) | 0.434650 / 0.293841 (0.140809) | 0.027265 / 0.128546 (-0.101281) | 0.007946 / 0.075646 (-0.067701) | 0.261207 / 0.419271 (-0.158065) | 0.045057 / 0.043533 (0.001525) | 0.391977 / 0.255139 (0.136838) | 0.418525 / 0.283200 (0.135326) | 0.020705 / 0.141683 (-0.120978) | 1.459271 / 1.452155 (0.007116) | 1.516935 / 1.492716 (0.024218) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.174659 / 0.018006 (0.156653) | 0.429627 / 0.000490 (0.429137) | 0.003714 / 0.000200 (0.003514) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023255 / 0.037411 (-0.014156) | 0.073463 / 0.014526 (0.058937) | 0.083000 / 0.176557 (-0.093557) | 0.146704 / 0.737135 (-0.590431) | 0.084419 / 0.296338 (-0.211919) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.392222 / 0.215209 (0.177013) | 3.902620 / 2.077655 (1.824966) | 1.903056 / 1.504120 (0.398936) | 1.753423 / 1.541195 (0.212228) | 1.874547 / 1.468490 
(0.406057) | 0.495947 / 4.584777 (-4.088829) | 3.084680 / 3.745712 (-0.661032) | 4.235064 / 5.269862 (-1.034797) | 2.626840 / 4.565676 (-1.938837) | 0.057273 / 0.424275 (-0.367002) | 0.006457 / 0.007607 (-0.001150) | 0.466018 / 0.226044 (0.239974) | 4.648264 / 2.268929 (2.379335) | 2.520293 / 55.444624 (-52.924331) | 2.339393 / 6.876477 (-4.537083) | 2.538848 / 2.142072 (0.396775) | 0.592018 / 4.805227 (-4.213210) | 0.125041 / 6.500664 (-6.375623) | 0.061038 / 0.075469 (-0.014431) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.244285 / 1.841788 (-0.597503) | 18.411576 / 8.074308 (10.337268) | 13.850100 / 10.191392 (3.658708) | 0.131904 / 0.680424 (-0.548520) | 0.016824 / 0.534201 (-0.517377) | 0.328931 / 0.579283 (-0.250352) | 0.364801 / 0.434364 (-0.069563) | 0.376298 / 0.540337 (-0.164039) | 0.525045 / 1.386936 (-0.861891) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006059 / 0.011353 (-0.005294) | 0.003693 / 0.011008 (-0.007315) | 0.062982 / 0.038508 (0.024473) | 0.062155 / 0.023109 (0.039046) | 0.389467 / 0.275898 (0.113568) | 0.437046 / 0.323480 (0.113566) | 0.004823 / 0.007986 (-0.003163) | 0.002935 / 0.004328 (-0.001393) | 0.062679 / 0.004250 (0.058429) | 0.049676 / 0.037052 (0.012623) | 0.418054 / 0.258489 (0.159565) | 0.442467 / 0.293841 (0.148626) | 0.027652 / 0.128546 (-0.100895) | 0.008146 / 0.075646 (-0.067501) | 0.069414 / 0.419271 (-0.349858) | 0.042884 / 0.043533 (-0.000649) | 0.387167 / 0.255139 (0.132028) | 0.418684 / 0.283200 (0.135484) | 0.022419 / 0.141683 (-0.119264) | 1.460606 / 1.452155 (0.008451) | 1.514204 / 1.492716 (0.021487) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200523 / 0.018006 (0.182517) | 0.415970 / 0.000490 (0.415481) | 0.003202 / 0.000200 (0.003002) | 0.000069 / 0.000054 (0.000014) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025836 / 0.037411 (-0.011575) | 0.078859 / 0.014526 (0.064333) | 0.088523 / 0.176557 (-0.088034) | 0.141572 / 0.737135 (-0.595563) | 0.090258 / 0.296338 (-0.206080) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416548 / 0.215209 (0.201339) | 4.155278 / 2.077655 (2.077623) | 2.126683 / 1.504120 (0.622563) | 1.963762 / 1.541195 (0.422568) | 2.029018 / 1.468490 (0.560528) | 0.499005 / 4.584777 (-4.085772) | 3.063503 / 3.745712 (-0.682209) | 4.250800 / 5.269862 (-1.019061) | 2.642634 / 4.565676 (-1.923043) | 0.057815 / 0.424275 (-0.366460) | 0.006784 / 0.007607 (-0.000823) | 0.492481 / 0.226044 (0.266437) | 4.914306 / 2.268929 (2.645377) | 2.601582 / 55.444624 (-52.843042) | 2.337863 / 6.876477 (-4.538614) | 2.462854 / 2.142072 (0.320782) | 0.593738 / 4.805227 (-4.211489) | 0.127030 / 6.500664 (-6.373634) | 0.064206 / 0.075469 (-0.011263) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.326919 / 1.841788 (-0.514868) | 18.728929 / 8.074308 (10.654621) | 13.903681 / 10.191392 (3.712289) | 0.162670 / 0.680424 (-0.517754) | 0.016913 / 0.534201 (-0.517288) | 0.337504 / 0.579283 (-0.241779) | 0.339786 / 0.434364 (-0.094577) | 0.384955 / 0.540337 (-0.155383) | 0.514358 / 1.386936 (-0.872578) |\n\n</details>\n</details>\n\n\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6085). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007610 / 0.011353 (-0.003743) | 0.004616 / 0.011008 (-0.006392) | 0.100330 / 0.038508 (0.061821) | 0.084450 / 0.023109 (0.061341) | 0.386610 / 0.275898 (0.110712) | 0.418479 / 0.323480 (0.094999) | 0.006085 / 0.007986 (-0.001900) | 0.003800 / 0.004328 (-0.000529) | 0.076248 / 0.004250 (0.071997) | 0.065175 / 0.037052 (0.028122) | 0.387154 / 0.258489 (0.128665) | 0.425484 / 0.293841 (0.131643) | 0.035946 / 0.128546 (-0.092601) | 0.009901 / 0.075646 (-0.065745) | 0.343015 / 0.419271 (-0.076256) | 0.060965 / 0.043533 (0.017432) | 0.390585 / 0.255139 (0.135446) | 0.405873 / 0.283200 (0.122673) | 0.026929 / 0.141683 (-0.114754) | 1.767916 / 1.452155 (0.315761) | 1.893431 / 1.492716 (0.400715) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237888 / 0.018006 (0.219882) | 0.503949 / 0.000490 (0.503459) | 0.004769 / 0.000200 (0.004570) | 0.000088 / 0.000054 (0.000033) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031553 / 0.037411 (-0.005859) | 0.096950 / 0.014526 (0.082424) | 0.110374 / 0.176557 (-0.066183) | 0.176754 / 0.737135 (-0.560381) | 0.111703 / 0.296338 (-0.184635) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449232 / 0.215209 (0.234023) | 4.510247 / 2.077655 (2.432592) | 2.188547 / 1.504120 (0.684427) | 2.007530 / 1.541195 (0.466335) | 2.095650 / 1.468490 
(0.627160) | 0.563262 / 4.584777 (-4.021515) | 4.062412 / 3.745712 (0.316700) | 6.338350 / 5.269862 (1.068489) | 3.844669 / 4.565676 (-0.721008) | 0.064517 / 0.424275 (-0.359758) | 0.008536 / 0.007607 (0.000929) | 0.553872 / 0.226044 (0.327828) | 5.530311 / 2.268929 (3.261383) | 2.835109 / 55.444624 (-52.609516) | 2.493900 / 6.876477 (-4.382577) | 2.728412 / 2.142072 (0.586340) | 0.680161 / 4.805227 (-4.125066) | 0.155831 / 6.500664 (-6.344833) | 0.070359 / 0.075469 (-0.005110) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.504852 / 1.841788 (-0.336936) | 22.806335 / 8.074308 (14.732027) | 16.598386 / 10.191392 (6.406994) | 0.207857 / 0.680424 (-0.472566) | 0.021425 / 0.534201 (-0.512776) | 0.474069 / 0.579283 (-0.105214) | 0.472263 / 0.434364 (0.037899) | 0.542195 / 0.540337 (0.001858) | 0.782871 / 1.386936 (-0.604065) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007443 / 0.011353 (-0.003910) | 0.004465 / 0.011008 (-0.006544) | 0.076268 / 0.038508 (0.037759) | 0.086607 / 0.023109 (0.063498) | 0.443295 / 0.275898 (0.167397) | 0.472819 / 0.323480 (0.149339) | 0.005841 / 0.007986 (-0.002144) | 0.003727 / 0.004328 (-0.000602) | 0.076015 / 0.004250 (0.071765) | 0.063188 / 0.037052 (0.026136) | 0.450555 / 0.258489 (0.192066) | 0.478532 / 0.293841 (0.184691) | 0.036258 / 0.128546 (-0.092288) | 0.009869 / 0.075646 (-0.065777) | 0.083786 / 0.419271 (-0.335486) | 0.056546 / 0.043533 (0.013013) | 0.449647 / 0.255139 (0.194508) | 0.457588 / 0.283200 (0.174389) | 0.027197 / 0.141683 (-0.114486) | 1.769991 / 1.452155 (0.317836) | 1.859905 / 1.492716 (0.367189) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.268637 / 0.018006 (0.250631) | 0.492860 / 0.000490 (0.492370) | 0.008574 / 0.000200 (0.008374) | 0.000140 / 0.000054 (0.000085) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037679 / 0.037411 (0.000268) | 0.108258 / 0.014526 (0.093733) | 0.117850 / 0.176557 (-0.058707) | 0.181611 / 0.737135 (-0.555524) | 0.120901 / 0.296338 (-0.175437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485780 / 0.215209 (0.270571) | 4.851289 / 2.077655 (2.773635) | 2.486068 / 1.504120 (0.981948) | 2.299417 / 1.541195 (0.758222) | 2.387093 / 1.468490 (0.918603) | 0.568826 / 4.584777 (-4.015951) | 4.163426 / 3.745712 (0.417713) | 6.224964 / 5.269862 (0.955102) | 3.255619 / 4.565676 (-1.310058) | 0.067081 / 0.424275 (-0.357194) | 0.009065 / 0.007607 (0.001458) | 0.580449 / 0.226044 (0.354405) | 5.786394 / 2.268929 (3.517465) | 3.057780 / 55.444624 (-52.386844) | 2.764339 / 6.876477 (-4.112138) | 2.880718 / 2.142072 (0.738645) | 0.681376 / 4.805227 (-4.123851) | 0.157858 / 6.500664 (-6.342806) | 0.072481 / 0.075469 (-0.002988) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.590704 / 1.841788 (-0.251083) | 23.141929 / 8.074308 (15.067620) | 17.001141 / 10.191392 (6.809749) | 0.203790 / 0.680424 (-0.476634) | 0.021766 / 0.534201 (-0.512435) | 0.475309 / 0.579283 (-0.103974) | 0.466448 / 0.434364 (0.032084) | 0.551470 / 0.540337 (0.011132) | 0.727876 / 1.386936 (-0.659060) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2350/comments | https://api.github.com/repos/huggingface/datasets/issues/2350/events | https://github.com/huggingface/datasets/issues/2350 | 889,580,247 | MDU6SXNzdWU4ODk1ODAyNDc= | 2,350 | `FaissIndex.save` throws error on GPU | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-12T03:41:56Z | 2021-05-17T13:41:41Z | 2021-05-17T13:41:41Z | null | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index
index.save(file)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save
faiss.write_index(index, str(file))
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index
return _swigfaiss.write_index(*args)
RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index
```
## Steps to reproduce the bug
Any dataset will do, I just selected a familiar one.
```python
import numpy as np
import datasets
INDEX_STR = "OPQ16_128,IVF512,PQ32"
INDEX_SAVE_PATH = "will_not_save.faiss"
data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]")
def encode(item):
return {"text_emb": np.random.randn(768).astype(np.float32)}
data = data.map(encode)
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)
data.save_faiss_index("text_emb", INDEX_SAVE_PATH)
```
## Expected results
Saving the index
## Actual results
Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index
## Environment info
- `datasets` version: 1.6.2
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I will be proposing a fix in a couple of minutes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2350/timeline | null | completed | null | null | false | [
"Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```"
] |
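A minimal sketch of the workaround described in the comment above, reusing the reproduction script from the issue body: the trained GPU index is moved back to CPU with `faiss.index_gpu_to_cpu` before serialization. Note that `_indexes` is an internal attribute of `datasets.Dataset` (as used in the workaround comment), and the output filename here is arbitrary.
```python
import faiss
import numpy as np
import datasets

INDEX_STR = "OPQ16_128,IVF512,PQ32"

data = datasets.load_dataset("Fraser/news-category-dataset", split="train[:10000]")
data = data.map(lambda example: {"text_emb": np.random.randn(768).astype(np.float32)})
data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0)

# Move the index from GPU back to CPU before saving, as suggested in the workaround comment;
# faiss only knows how to serialize the CPU variant of this index type.
data._indexes["text_emb"].faiss_index = faiss.index_gpu_to_cpu(data._indexes["text_emb"].faiss_index)
data.save_faiss_index("text_emb", "index_on_cpu.faiss")
```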
https://api.github.com/repos/huggingface/datasets/issues/3323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3323/comments | https://api.github.com/repos/huggingface/datasets/issues/3323/events | https://github.com/huggingface/datasets/pull/3323 | 1,064,660,452 | PR_kwDODunzps4vEZwq | 3,323 | Fix wrongly converted assert | [] | closed | false | null | 1 | 2021-11-26T16:05:39Z | 2021-11-26T16:44:12Z | 2021-11-26T16:44:11Z | null | Seems like this assertion was replaced by an exception but the condition got wrongly converted. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3323/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3323.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3323",
"merged_at": "2021-11-26T16:44:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3323.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3323"
} | true | [
"Closes #3327 "
] |
https://api.github.com/repos/huggingface/datasets/issues/3465 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3465/comments | https://api.github.com/repos/huggingface/datasets/issues/3465/events | https://github.com/huggingface/datasets/issues/3465 | 1,085,400,432 | I_kwDODunzps5AseVw | 3,465 | Unable to load 'cnn_dailymail' dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 3 | 2021-12-21T03:32:21Z | 2022-02-17T14:13:57Z | 2022-02-17T14:13:57Z | null | ## Describe the bug
I wanted to load the cnn_dailymail dataset from Hugging Face datasets on Google Colab, but I am getting an error while loading it.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True)
```
## Expected results
Expecting to load 'cnn_dailymail' dataset.
## Actual results
`NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3465/timeline | null | completed | null | null | false | [
"Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?",
"This looks related to https://github.com/huggingface/datasets/issues/996",
"It seems that [this](https://huggingface.co/datasets/ccdv/cnn_dailymail) copy of the dataset has fixed the problem"
] |
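A hedged sketch of the fix suggested in the last comment: load the community mirror instead of the Google Drive-hosted files. The `"3.0.0"` configuration name is assumed to match the original dataset; check the mirror's dataset card if it differs.
```python
from datasets import load_dataset

# Mirror mentioned in the comment above; avoids the Google Drive quota / 404 issue.
dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0", split="train")
print(dataset)
```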
https://api.github.com/repos/huggingface/datasets/issues/5232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5232/comments | https://api.github.com/repos/huggingface/datasets/issues/5232/events | https://github.com/huggingface/datasets/issues/5232 | 1,446,294,165 | I_kwDODunzps5WNLKV | 5,232 | Incompatible dill versions in datasets 2.6.1 | [] | closed | false | null | 2 | 2022-11-12T06:46:23Z | 2022-11-14T08:24:43Z | 2022-11-14T08:07:59Z | null | ### Describe the bug
datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6, which is required by the multiprocess dependency of datasets 2.6.1.
This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but has not yet been released. Please release a new version of the datasets library to fix this.
### Steps to reproduce the bug
1. Create requirements.in with datasets (or datasets[s3]) as the only dependency
2. Run pip-compile
3. The output is as follows:
```
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets[s3]==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets[s3]==2.6.1->-r requirements.in (line 1))
```
### Expected behavior
pip-compile produces requirements.txt without any conflicts
### Environment info
datasets version 2.6.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5232/timeline | null | completed | null | null | false | [
"Thanks for reporting, @vinaykakade.\r\n\r\nWe are discussing about making a release early this week.\r\n\r\nPlease note that in the meantime, in your specific case (as we also pointed out here: https://github.com/huggingface/datasets/issues/5162#issuecomment-1291720293), you can circumvent the issue by pinning `multiprocess` to 0.70.13 version (instead of using latest 0.70.14).\r\n\r\nDuplicate of:\r\n- https://github.com/huggingface/datasets/issues/5162",
"You can also make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2091/comments | https://api.github.com/repos/huggingface/datasets/issues/2091/events | https://github.com/huggingface/datasets/pull/2091 | 836,831,403 | MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3 | 2,091 | Fix copy snippet in docs | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 0 | 2021-03-20T15:08:22Z | 2021-03-24T08:20:50Z | 2021-03-23T17:18:31Z | null | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2091/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2091/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2091.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2091",
"merged_at": "2021-03-23T17:18:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2091.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2091"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2141/comments | https://api.github.com/repos/huggingface/datasets/issues/2141/events | https://github.com/huggingface/datasets/pull/2141 | 843,914,790 | MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw | 2,141 | added spans field for the wikiann datasets | [] | closed | false | null | 3 | 2021-03-29T23:38:26Z | 2021-03-31T13:27:50Z | 2021-03-31T13:27:50Z | null | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2141/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2141/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2141.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2141",
"merged_at": "2021-03-31T13:27:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2141.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2141"
} | true | [
"Hi @lhoestq \r\nThanks a lot for taking time checking it. I update \"dataset_infos.json\", I added description to the function of _generate_samples in wikiann.py but I was not sure about the format to write in README. thanks. ",
"Thanks !\r\n\r\nFor the fields description in the dataset card, something like this does the job:\r\n```\r\n- `tokens`: a `list` of `string` features.\r\n- `langs`: a `list` of `string` features that correspond to the language of each token.\r\n- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).\r\n- `spans`: a `list` of `string` features, that is the list of named entities in the input text formatted as ``<TAG>: <mention>``\r\n```\r\n\r\nAlso for information, I think the trailer of rick and morty season 5 is out now :)",
"Hi @lhoestq \r\nthank you! This is updated now, please feel free to let me know if I need to modify something :) thanks "
] |
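A rough sketch of how the `spans` field described in the comment above (`<TAG>: <mention>`) could be derived from `tokens` and `ner_tags`; the function name and BIO-decoding details here are ours, not necessarily what the PR implements.
```python
def tags_to_spans(tokens, ner_tags, label_names):
    """Collect `TAG: mention` strings from BIO-tagged tokens (sketch, not the PR's code)."""
    spans, current_tag, current_tokens = [], None, []
    for token, tag_id in zip(tokens, ner_tags):
        label = label_names[tag_id]  # e.g. "O", "B-PER", "I-PER", ...
        if label == "O" or label.startswith("B-"):
            # close the previous mention, if any, before starting (or skipping) a new one
            if current_tag is not None:
                spans.append(f"{current_tag}: {' '.join(current_tokens)}")
            current_tag = label[2:] if label.startswith("B-") else None
            current_tokens = [token] if label.startswith("B-") else []
        else:  # "I-" continuation of the current mention
            current_tokens.append(token)
    if current_tag is not None:
        spans.append(f"{current_tag}: {' '.join(current_tokens)}")
    return spans

label_names = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
print(tags_to_spans(["John", "Smith", "visited", "Paris"], [1, 2, 0, 5], label_names))
# ['PER: John Smith', 'LOC: Paris']
```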
https://api.github.com/repos/huggingface/datasets/issues/2068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2068/comments | https://api.github.com/repos/huggingface/datasets/issues/2068/events | https://github.com/huggingface/datasets/issues/2068 | 833,602,832 | MDU6SXNzdWU4MzM2MDI4MzI= | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | [] | closed | false | null | 7 | 2021-03-17T10:04:27Z | 2021-06-14T04:47:30Z | 2021-06-14T04:47:30Z | null | I get an error when running data loading using the SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3.
By running the container interactively, I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also, as the first lines in the data loading module I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck.
Many Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2068/timeline | null | completed | null | null | false | [
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ",
"Could paste the code you use the start your training job and the fine-tuning script you run? ",
"@sivakhno this should be now fixed in `datasets>=1.5.0`. ",
"@philschmid Recently released tensorflow-macos seems to be missing. ",
"I've created a PR to add this. "
] |
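A small hedged check, based on the maintainer comment that this is fixed in `datasets>=1.5.0`: after upgrading, `set_format(type="torch")` should return PyTorch tensors instead of raising the error above.
```python
import datasets
import torch

print(datasets.__version__)  # the fix above is reported for datasets>=1.5.0

ds = datasets.Dataset.from_dict({"input_ids": [[0, 1, 2], [3, 4, 5]], "labels": [0, 1]})
ds.set_format(type="torch", columns=["input_ids", "labels"])
print(type(ds[0]["input_ids"]))  # expected: <class 'torch.Tensor'>
```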
https://api.github.com/repos/huggingface/datasets/issues/5886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5886/comments | https://api.github.com/repos/huggingface/datasets/issues/5886/events | https://github.com/huggingface/datasets/issues/5886 | 1,721,070,225 | I_kwDODunzps5mlXKR | 5,886 | Use work-stealing algorithm when parallel computing | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2023-05-23T03:08:44Z | 2023-05-24T15:30:09Z | null | null | ### Feature request
When I used the `Dataset.map` API to process data concurrently, I found that
it gets slower and slower as it gets closer to completion. Then I read the source code of `arrow_dataset.py` and found that it shards the dataset and uses a multiprocessing pool to execute each shard. This may cause the slowest task to drag out the entire program's execution time, especially when processing huge datasets.
### Motivation
Use a work-stealing algorithm instead of static sharding for parallel computing, to optimize performance.
### Your contribution
Just an idea. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5886/timeline | null | null | null | null | false | [
"Alternatively we could set the number of shards to be a factor than the number of processes (current they're equal) - this way it will be less likely to end up with a shard that is significantly slower than all the other ones."
] |
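A rough, hedged illustration of the comment above (not the `datasets` implementation): create many more shards than workers and let a process pool hand shards out dynamically, so fast workers keep pulling new shards instead of waiting on the slowest equal-size shard. Here `ds` and `expensive_fn` are placeholders for the dataset and per-example function from the issue, and a fork-based start method is assumed so both are visible in the workers.
```python
from multiprocessing import Pool

NUM_SHARDS = 64  # many more shards than workers approximates dynamic / work-stealing scheduling
NUM_PROC = 8

def process_shard(shard_index):
    # `ds` is the dataset to process and `expensive_fn` the per-example function (placeholders)
    shard = ds.shard(num_shards=NUM_SHARDS, index=shard_index, contiguous=True)
    return shard.map(expensive_fn)

with Pool(NUM_PROC) as pool:
    # imap_unordered lets idle workers grab the next shard immediately; order is not preserved,
    # so the processed shards would need to be reordered/concatenated afterwards if order matters.
    processed_shards = list(pool.imap_unordered(process_shard, range(NUM_SHARDS)))
```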
https://api.github.com/repos/huggingface/datasets/issues/4489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4489/comments | https://api.github.com/repos/huggingface/datasets/issues/4489/events | https://github.com/huggingface/datasets/pull/4489 | 1,270,706,195 | PR_kwDODunzps45oONF | 4,489 | Add SV-Ident dataset | [] | closed | false | null | 5 | 2022-06-14T12:09:00Z | 2022-06-20T08:48:26Z | 2022-06-20T08:37:27Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4489/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4489.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4489",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4489.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4489"
} | true | [
"Hi @e-tornike, thanks a lot for adding this interesting dataset.\r\n\r\nRecently at Hugging Face, we have decided to give priority to adding datasets directly on the Hub. Would you mind to transfer your loading script to the Hub? You could create a dedicated org namespace, so that your dataset would be accessible using `vadis/sv_ident` or `sdproc/sv_ident` or `coling/sv_ident` (as you prefer).\r\n\r\nYou have an example here: https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus",
"Additionally, please feel free to ping us if you need assistance/help in creating this dataset.\r\n\r\nYou could put the link to your Hub dataset here in this Issue discussion page, so that we can follow the progress. :)",
"Hi @albertvillanova, thanks for the feedback! Uploading via the Hub is a lot easier! \r\n\r\nI've uploaded the dataset here: https://huggingface.co/datasets/vadis/sv-ident, but it's reporting a \"Status400Error\". Is there any way to see the logs of the dataset script and what is causing the error?",
"Hi @e-tornike, good job at https://huggingface.co/datasets/vadis/sv-ident.\r\n\r\nNormally, you can run locally the loading of the dataset by passing `streaming=True` (as the previewer does):\r\n```python\r\nds = load_dataset(\"path/to/sv_ident.py, split=\"train\", streaming=True)\r\nitem = next(iter(ds))\r\nitem\r\n```\r\n\r\nLet me have a look and open a discussion on your Hub repo! ;)",
"I've opened an Issue: \r\n- #4527 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5915/comments | https://api.github.com/repos/huggingface/datasets/issues/5915/events | https://github.com/huggingface/datasets/pull/5915 | 1,732,389,984 | PR_kwDODunzps5RsVzj | 5,915 | Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"` | [] | closed | false | null | 4 | 2023-05-30T14:27:55Z | 2023-05-31T13:31:21Z | 2023-05-31T13:23:54Z | null | Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring)
Fix #5874 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5915/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5915.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5915",
"merged_at": "2023-05-31T13:23:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5915.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5915"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006416 / 0.011353 (-0.004937) | 0.004278 / 0.011008 (-0.006731) | 0.097562 / 0.038508 (0.059054) | 0.029488 / 0.023109 (0.006379) | 0.308648 / 0.275898 (0.032750) | 0.339879 / 0.323480 (0.016399) | 0.005288 / 0.007986 (-0.002697) | 0.005033 / 0.004328 (0.000704) | 0.074666 / 0.004250 (0.070416) | 0.034888 / 0.037052 (-0.002164) | 0.309960 / 0.258489 (0.051471) | 0.344276 / 0.293841 (0.050435) | 0.025564 / 0.128546 (-0.102982) | 0.008579 / 0.075646 (-0.067067) | 0.319796 / 0.419271 (-0.099476) | 0.044786 / 0.043533 (0.001253) | 0.308888 / 0.255139 (0.053749) | 0.334001 / 0.283200 (0.050802) | 0.089917 / 0.141683 (-0.051766) | 1.456696 / 1.452155 (0.004541) | 1.542273 / 1.492716 (0.049557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213236 / 0.018006 (0.195230) | 0.425139 / 0.000490 (0.424650) | 0.008831 / 0.000200 (0.008631) | 0.000209 / 0.000054 (0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023990 / 0.037411 (-0.013421) | 0.096787 / 0.014526 (0.082261) | 0.105783 / 0.176557 (-0.070774) | 0.167182 / 0.737135 (-0.569954) | 0.108896 / 0.296338 (-0.187442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419844 / 0.215209 (0.204635) | 4.201909 / 2.077655 (2.124254) | 1.910784 / 1.504120 (0.406664) | 1.685183 / 1.541195 (0.143988) | 1.716927 / 1.468490 
(0.248437) | 0.548261 / 4.584777 (-4.036516) | 3.414168 / 3.745712 (-0.331544) | 1.695446 / 5.269862 (-3.574415) | 0.989668 / 4.565676 (-3.576008) | 0.067328 / 0.424275 (-0.356948) | 0.012084 / 0.007607 (0.004477) | 0.523799 / 0.226044 (0.297754) | 5.240589 / 2.268929 (2.971661) | 2.331618 / 55.444624 (-53.113007) | 1.996094 / 6.876477 (-4.880383) | 2.105450 / 2.142072 (-0.036623) | 0.654614 / 4.805227 (-4.150613) | 0.134721 / 6.500664 (-6.365943) | 0.066227 / 0.075469 (-0.009242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196266 / 1.841788 (-0.645521) | 13.990045 / 8.074308 (5.915737) | 13.928126 / 10.191392 (3.736734) | 0.142600 / 0.680424 (-0.537824) | 0.016462 / 0.534201 (-0.517739) | 0.363113 / 0.579283 (-0.216170) | 0.428590 / 0.434364 (-0.005773) | 0.452594 / 0.540337 (-0.087743) | 0.551678 / 1.386936 (-0.835258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005992 / 0.011353 (-0.005361) | 0.004161 / 0.011008 (-0.006847) | 0.076098 / 0.038508 (0.037589) | 0.028559 / 0.023109 (0.005450) | 0.411696 / 0.275898 (0.135798) | 0.444519 / 0.323480 (0.121040) | 0.004965 / 0.007986 (-0.003021) | 0.003452 / 0.004328 (-0.000876) | 0.075107 / 0.004250 (0.070857) | 0.037305 / 0.037052 (0.000252) | 0.429728 / 0.258489 (0.171239) | 0.444313 / 0.293841 (0.150472) | 0.025278 / 0.128546 (-0.103268) | 0.008527 / 0.075646 (-0.067120) | 0.081502 / 0.419271 (-0.337770) | 0.041237 / 0.043533 (-0.002296) | 0.417848 / 0.255139 (0.162709) | 0.426615 / 0.283200 (0.143415) | 0.094641 / 0.141683 (-0.047041) | 1.525141 / 1.452155 (0.072987) | 1.615608 / 1.492716 (0.122892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192867 / 0.018006 (0.174861) | 0.414979 / 0.000490 (0.414490) | 0.000815 / 0.000200 (0.000615) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012058) | 0.102085 / 0.014526 (0.087559) | 0.107930 / 0.176557 (-0.068626) | 0.160483 / 0.737135 (-0.576652) | 0.112341 / 0.296338 (-0.183997) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446938 / 0.215209 (0.231728) | 4.480057 / 2.077655 (2.402402) | 2.154825 / 1.504120 (0.650705) | 1.942774 / 1.541195 (0.401580) | 1.996418 / 1.468490 (0.527928) | 0.556728 / 4.584777 (-4.028049) | 3.441228 / 3.745712 (-0.304484) | 3.004179 / 5.269862 (-2.265683) | 1.314104 / 4.565676 (-3.251573) | 0.068670 / 0.424275 (-0.355606) | 0.011972 / 0.007607 (0.004365) | 0.556604 / 0.226044 (0.330560) | 5.561783 / 2.268929 (3.292855) | 2.631262 / 55.444624 (-52.813363) | 2.262143 / 6.876477 (-4.614333) | 2.364243 / 2.142072 (0.222170) | 0.660621 / 4.805227 (-4.144607) | 0.137371 / 6.500664 (-6.363293) | 0.069104 / 0.075469 (-0.006365) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305706 / 1.841788 (-0.536081) | 14.015932 / 8.074308 (5.941624) | 14.353580 / 10.191392 (4.162187) | 0.146172 / 0.680424 (-0.534251) | 0.016699 / 0.534201 (-0.517502) | 0.357970 / 0.579283 (-0.221313) | 0.389067 / 0.434364 (-0.045297) | 0.415470 / 0.540337 (-0.124867) | 0.501359 / 1.386936 (-0.885577) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006800 / 0.011353 (-0.004552) | 0.004721 / 0.011008 (-0.006287) | 0.097760 / 0.038508 (0.059252) | 0.034192 / 0.023109 (0.011083) | 0.298240 / 0.275898 (0.022342) | 0.331119 / 0.323480 (0.007639) | 0.005826 / 0.007986 (-0.002160) | 0.003968 / 0.004328 (-0.000360) | 0.073833 / 0.004250 (0.069582) | 0.046288 / 0.037052 (0.009236) | 0.303018 / 0.258489 (0.044529) | 0.342163 / 0.293841 (0.048322) | 0.028504 / 0.128546 (-0.100042) | 0.009031 / 0.075646 (-0.066615) | 0.331617 / 0.419271 (-0.087655) | 0.060911 / 0.043533 (0.017379) | 0.304044 / 0.255139 (0.048905) | 0.328959 / 0.283200 (0.045759) | 0.113174 / 0.141683 (-0.028509) | 1.424652 / 1.452155 (-0.027502) | 1.531392 / 1.492716 (0.038676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206175 / 0.018006 (0.188169) | 0.435916 / 0.000490 (0.435426) | 0.002587 / 0.000200 (0.002387) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026996 / 0.037411 (-0.010415) | 0.106722 / 0.014526 (0.092196) | 0.117655 / 0.176557 (-0.058902) | 0.176969 / 0.737135 (-0.560166) | 0.122577 / 0.296338 (-0.173762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396086 / 0.215209 (0.180877) | 3.972465 / 2.077655 (1.894811) | 1.800798 / 1.504120 (0.296678) | 1.616747 / 1.541195 (0.075552) | 1.680711 / 1.468490 
(0.212221) | 0.526479 / 4.584777 (-4.058298) | 3.791528 / 3.745712 (0.045816) | 2.989518 / 5.269862 (-2.280344) | 1.463221 / 4.565676 (-3.102455) | 0.065649 / 0.424275 (-0.358626) | 0.012155 / 0.007607 (0.004548) | 0.500241 / 0.226044 (0.274197) | 5.008895 / 2.268929 (2.739966) | 2.315288 / 55.444624 (-53.129336) | 1.959409 / 6.876477 (-4.917067) | 2.102371 / 2.142072 (-0.039701) | 0.639611 / 4.805227 (-4.165617) | 0.140101 / 6.500664 (-6.360563) | 0.063599 / 0.075469 (-0.011870) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206729 / 1.841788 (-0.635059) | 15.127250 / 8.074308 (7.052942) | 14.397228 / 10.191392 (4.205836) | 0.148802 / 0.680424 (-0.531622) | 0.017628 / 0.534201 (-0.516573) | 0.396150 / 0.579283 (-0.183133) | 0.435826 / 0.434364 (0.001462) | 0.471215 / 0.540337 (-0.069122) | 0.559413 / 1.386936 (-0.827523) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004520 / 0.011008 (-0.006488) | 0.074395 / 0.038508 (0.035887) | 0.033400 / 0.023109 (0.010291) | 0.388411 / 0.275898 (0.112513) | 0.396714 / 0.323480 (0.073234) | 0.005736 / 0.007986 (-0.002250) | 0.004038 / 0.004328 (-0.000291) | 0.073595 / 0.004250 (0.069345) | 0.045207 / 0.037052 (0.008155) | 0.378096 / 0.258489 (0.119607) | 0.417830 / 0.293841 (0.123989) | 0.028365 / 0.128546 (-0.100181) | 0.008887 / 0.075646 (-0.066760) | 0.080766 / 0.419271 (-0.338505) | 0.046923 / 0.043533 (0.003390) | 0.376190 / 0.255139 (0.121051) | 0.385875 / 0.283200 (0.102675) | 0.107542 / 0.141683 (-0.034141) | 1.409257 / 1.452155 (-0.042898) | 1.518475 / 1.492716 (0.025759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223299 / 0.018006 (0.205292) | 0.440640 / 0.000490 (0.440150) | 0.000397 / 0.000200 (0.000197) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031388 / 0.037411 (-0.006024) | 0.113078 / 0.014526 (0.098552) | 0.124398 / 0.176557 (-0.052159) | 0.173802 / 0.737135 (-0.563333) | 0.129555 / 0.296338 (-0.166783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440220 / 0.215209 (0.225011) | 4.398052 / 2.077655 (2.320398) | 2.188396 / 1.504120 (0.684276) | 1.997811 / 1.541195 (0.456616) | 2.093338 / 1.468490 (0.624847) | 0.519597 / 4.584777 (-4.065180) | 3.885795 / 3.745712 (0.140083) | 2.896327 / 5.269862 (-2.373534) | 1.245785 / 4.565676 (-3.319891) | 0.065675 / 0.424275 (-0.358600) | 0.011729 / 0.007607 (0.004121) | 0.541526 / 0.226044 (0.315482) | 5.406763 / 2.268929 (3.137834) | 2.722914 / 55.444624 (-52.721711) | 2.471111 / 6.876477 (-4.405366) | 2.541488 / 2.142072 (0.399415) | 0.633566 / 4.805227 (-4.171661) | 0.139622 / 6.500664 (-6.361042) | 0.064220 / 0.075469 (-0.011249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296097 / 1.841788 (-0.545690) | 15.095320 / 8.074308 (7.021012) | 14.300821 / 10.191392 (4.109429) | 0.145470 / 0.680424 (-0.534954) | 0.017496 / 0.534201 (-0.516705) | 0.400589 / 0.579283 (-0.178694) | 0.423091 / 0.434364 (-0.011273) | 0.468258 / 0.540337 (-0.072079) | 0.570873 / 1.386936 (-0.816063) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005918 / 0.011353 (-0.005435) | 0.004393 / 0.011008 (-0.006615) | 0.091677 / 0.038508 (0.053169) | 0.033546 / 0.023109 (0.010437) | 0.344682 / 0.275898 (0.068784) | 0.388906 / 0.323480 (0.065426) | 0.005412 / 0.007986 (-0.002574) | 0.004909 / 0.004328 (0.000580) | 0.082589 / 0.004250 (0.078339) | 0.045242 / 0.037052 (0.008190) | 0.339191 / 0.258489 (0.080702) | 0.349673 / 0.293841 (0.055832) | 0.026805 / 0.128546 (-0.101742) | 0.007529 / 0.075646 (-0.068117) | 0.319108 / 0.419271 (-0.100164) | 0.049482 / 0.043533 (0.005949) | 0.320013 / 0.255139 (0.064874) | 0.342059 / 0.283200 (0.058859) | 0.096623 / 0.141683 (-0.045060) | 1.458204 / 1.452155 (0.006049) | 1.571172 / 1.492716 (0.078455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235171 / 0.018006 (0.217165) | 0.479678 / 0.000490 (0.479188) | 0.006627 / 0.000200 (0.006427) | 0.000257 / 0.000054 (0.000202) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025716 / 0.037411 (-0.011696) | 0.107730 / 0.014526 (0.093204) | 0.111595 / 0.176557 (-0.064962) | 0.171316 / 0.737135 (-0.565819) | 0.118962 / 0.296338 (-0.177377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.376318 / 0.215209 (0.161109) | 4.039484 / 2.077655 (1.961829) | 1.811548 / 1.504120 (0.307428) | 1.646728 / 1.541195 (0.105533) | 1.688071 / 1.468490 
(0.219581) | 0.551256 / 4.584777 (-4.033520) | 4.153931 / 3.745712 (0.408218) | 3.424154 / 5.269862 (-1.845707) | 1.734860 / 4.565676 (-2.830816) | 0.067753 / 0.424275 (-0.356522) | 0.012699 / 0.007607 (0.005092) | 0.505722 / 0.226044 (0.279677) | 4.997321 / 2.268929 (2.728392) | 2.258755 / 55.444624 (-53.185869) | 1.954382 / 6.876477 (-4.922095) | 1.967545 / 2.142072 (-0.174527) | 0.630489 / 4.805227 (-4.174738) | 0.138738 / 6.500664 (-6.361926) | 0.064907 / 0.075469 (-0.010562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209634 / 1.841788 (-0.632154) | 15.055062 / 8.074308 (6.980754) | 12.721606 / 10.191392 (2.530214) | 0.164908 / 0.680424 (-0.515516) | 0.019528 / 0.534201 (-0.514673) | 0.400136 / 0.579283 (-0.179147) | 0.451640 / 0.434364 (0.017276) | 0.466272 / 0.540337 (-0.074065) | 0.553258 / 1.386936 (-0.833679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006341 / 0.011353 (-0.005011) | 0.004617 / 0.011008 (-0.006391) | 0.077953 / 0.038508 (0.039445) | 0.031104 / 0.023109 (0.007995) | 0.360328 / 0.275898 (0.084430) | 0.408403 / 0.323480 (0.084923) | 0.005704 / 0.007986 (-0.002282) | 0.003588 / 0.004328 (-0.000741) | 0.071441 / 0.004250 (0.067190) | 0.043520 / 0.037052 (0.006468) | 0.375798 / 0.258489 (0.117309) | 0.400955 / 0.293841 (0.107114) | 0.028166 / 0.128546 (-0.100381) | 0.008578 / 0.075646 (-0.067068) | 0.086673 / 0.419271 (-0.332598) | 0.046424 / 0.043533 (0.002891) | 0.367276 / 0.255139 (0.112137) | 0.414550 / 0.283200 (0.131351) | 0.097355 / 0.141683 (-0.044328) | 1.465191 / 1.452155 (0.013036) | 1.555028 / 1.492716 (0.062312) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196642 / 0.018006 (0.178636) | 0.464221 / 0.000490 (0.463731) | 0.002726 / 0.000200 (0.002526) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028078 / 0.037411 (-0.009333) | 0.110762 / 0.014526 (0.096236) | 0.122212 / 0.176557 (-0.054344) | 0.164758 / 0.737135 (-0.572377) | 0.133969 / 0.296338 (-0.162370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448134 / 0.215209 (0.232925) | 4.339335 / 2.077655 (2.261680) | 2.129209 / 1.504120 (0.625089) | 1.957805 / 1.541195 (0.416611) | 1.994038 / 1.468490 (0.525548) | 0.497101 / 4.584777 (-4.087676) | 4.114432 / 3.745712 (0.368720) | 3.437305 / 5.269862 (-1.832556) | 1.692810 / 4.565676 (-2.872866) | 0.071077 / 0.424275 (-0.353198) | 0.012735 / 0.007607 (0.005128) | 0.534393 / 0.226044 (0.308348) | 5.217445 / 2.268929 (2.948517) | 2.594858 / 55.444624 (-52.849766) | 2.317464 / 6.876477 (-4.559012) | 2.337974 / 2.142072 (0.195902) | 0.622291 / 4.805227 (-4.182936) | 0.144934 / 6.500664 (-6.355730) | 0.068524 / 0.075469 (-0.006945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310601 / 1.841788 (-0.531187) | 15.771527 / 8.074308 (7.697219) | 13.952032 / 10.191392 (3.760640) | 0.212473 / 0.680424 (-0.467951) | 0.017963 / 0.534201 (-0.516238) | 0.400755 / 0.579283 (-0.178528) | 0.439817 / 0.434364 (0.005453) | 0.472614 / 0.540337 (-0.067724) | 0.558410 / 1.386936 (-0.828526) |\n\n</details>\n</details>\n\n\n"
] |
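A hypothetical, minimal illustration of the behaviour this PR describes; the function and constant names below are ours, not the actual `DatasetBuilder` internals.
```python
SUPPORTED_AS_DATASET_FORMAT = "arrow"

def check_file_format(file_format: str) -> None:
    # Reject cache formats that `as_dataset` cannot load, instead of failing later.
    if file_format != SUPPORTED_AS_DATASET_FORMAT:
        raise NotImplementedError(
            f"Loading a dataset cached in the {file_format!r} format is not supported; "
            f"only {SUPPORTED_AS_DATASET_FORMAT!r} can be read back with `as_dataset`."
        )

check_file_format("arrow")    # passes silently
check_file_format("parquet")  # raises NotImplementedError, as described in the PR title
```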
https://api.github.com/repos/huggingface/datasets/issues/5086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5086/comments | https://api.github.com/repos/huggingface/datasets/issues/5086/events | https://github.com/huggingface/datasets/issues/5086 | 1,400,216,975 | I_kwDODunzps5TdZ2P | 5,086 | HTTPError: 404 Client Error: Not Found for url | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-10-06T19:48:58Z | 2022-10-07T15:12:01Z | 2022-10-07T15:12:01Z | null | ## Describe the bug
I was following chapter 5 of the Hugging Face course: https://huggingface.co/course/chapter5/6?fw=tf
However, I'm not able to download the dataset; loading fails with a 404 error
<img width="1160" alt="iShot2022-10-06_15 54 50" src="https://user-images.githubusercontent.com/54015474/194406327-ae62c2f3-1da5-4686-8631-13d879a0edee.png">
## Steps to reproduce the bug
```python
from huggingface_hub import hf_hub_url
data_files = hf_hub_url(
repo_id="lewtun/github-issues",
filename="datasets-issues-with-hf-doc-builder.jsonl",
repo_type="dataset",
)
from datasets import load_dataset
issues_dataset = load_dataset("json", data_files=data_files, split="train")
issues_dataset
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.5.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5086/timeline | null | completed | null | null | false | [
"FYI @lewtun ",
"Hi @km5ar, thanks for reporting.\r\n\r\nThis should be fixed in the notebook:\r\n- the filename `datasets-issues-with-hf-doc-builder.jsonl` no longer exists on the repo; instead, current filename is `datasets-issues-with-comments.jsonl`\r\n- see: https://huggingface.co/datasets/lewtun/github-issues/tree/main\r\n\r\nAnyway, depending on your version of `datasets`, you can now use:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"lewtun/github-issues\")\r\nissues_dataset\r\n```\r\ninstead of:\r\n```python\r\nfrom huggingface_hub import hf_hub_url\r\n\r\ndata_files = hf_hub_url(\r\n repo_id=\"lewtun/github-issues\",\r\n filename=\"datasets-issues-with-hf-doc-builder.jsonl\",\r\n repo_type=\"dataset\",\r\n)\r\nfrom datasets import load_dataset\r\n\r\nissues_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\nissues_dataset\r\n```\r\n\r\nOutput:\r\n```python\r\nIn [25]: ds = load_dataset(\"lewtun/github-issues\")\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10.5k/10.5k [00:00<00:00, 5.75MB/s]\r\nUsing custom data configuration lewtun--github-issues-cff5093ecc410ea2\r\nDownloading and preparing dataset json/lewtun--github-issues to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12.2M/12.2M [00:00<00:00, 26.5MB/s]\r\nDownloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.70s/it]\r\nExtracting data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1589.96it/s]\r\nDataset json downloaded and prepared to .../.cache/huggingface/datasets/lewtun___json/lewtun--github-issues-cff5093ecc410ea2/0.0.0/e6070c77f18f01a5ad4551a8b7edfba20b8438b7cad4d94e6ad9378022ce4aab. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 133.95it/s]\r\n\r\nIn [26]: ds\r\nOut[26]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['url', 'repository_url', 'labels_url', 'comments_url', 'events_url', 'html_url', 'id', 'node_id', 'number', 'title', 'user', 'labels', 'state', 'locked', 'assignee', 'assignees', 'milestone', 'comments', 'created_at', 'updated_at', 'closed_at', 'author_association', 'active_lock_reason', 'pull_request', 'body', 'timeline_url', 'performed_via_github_app', 'is_pull_request'],\r\n num_rows: 3019\r\n })\r\n})\r\n```",
"Thanks for reporting @km5ar and thank you @albertvillanova for the quick solution! I'll post a fix on the source too"
] |
https://api.github.com/repos/huggingface/datasets/issues/3231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3231/comments | https://api.github.com/repos/huggingface/datasets/issues/3231/events | https://github.com/huggingface/datasets/pull/3231 | 1,047,170,906 | PR_kwDODunzps4uNmWT | 3,231 | Group tests in multiprocessing workers by test file | [] | closed | false | null | 0 | 2021-11-08T08:46:03Z | 2021-11-08T13:19:18Z | 2021-11-08T08:59:44Z | null | By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker.
Therefore, the fixture `hf_token` will be called only once (and from the same worker).
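For illustration only, here is a minimal sketch of one way to group tests per file with pytest-xdist (hypothetical code; the exact mechanism used in this PR may differ):
```python
# conftest.py: hypothetical sketch, assumes pytest-xdist >= 2.5 and running
# the suite with `pytest -n <workers> --dist loadgroup`
import pytest


def pytest_collection_modifyitems(config, items):
    for item in items:
        # Put every test into a group named after its file, so e.g. all tests
        # in test_load.py land on the same worker and shared fixtures such as
        # hf_token are set up only once there.
        item.add_marker(pytest.mark.xdist_group(name=item.fspath.basename))
```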
Related to: #3200.
Fix #3219. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3231/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3231",
"merged_at": "2021-11-08T08:59:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3231"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2489 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2489/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2489/comments | https://api.github.com/repos/huggingface/datasets/issues/2489/events | https://github.com/huggingface/datasets/issues/2489 | 919,569,749 | MDU6SXNzdWU5MTk1Njk3NDk= | 2,489 | Allow latest pyarrow version once segfault bug is fixed | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-06-12T14:09:52Z | 2021-06-14T07:53:23Z | 2021-06-14T07:53:23Z | null | As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568):
- it was fixed on 3 May 2021
- version 4.0.1 was released on 19 May 2021 with the bug fix | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2489/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2489/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3006/comments | https://api.github.com/repos/huggingface/datasets/issues/3006/events | https://github.com/huggingface/datasets/pull/3006 | 1,014,770,821 | PR_kwDODunzps4snsBm | 3,006 | Fix Windows paths in CommonLanguage dataset | [] | closed | false | null | 0 | 2021-10-04T06:08:58Z | 2021-10-04T09:07:58Z | 2021-10-04T09:07:58Z | null | Minor fix in CommonLanguage dataset for Windows pathname component separator.
Related to #2989. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3006/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3006.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3006",
"merged_at": "2021-10-04T09:07:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3006.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3006"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/626/comments | https://api.github.com/repos/huggingface/datasets/issues/626/events | https://github.com/huggingface/datasets/pull/626 | 701,352,605 | MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1 | 626 | Update GLUE URLs (now hosted on FB) | [] | closed | false | null | 0 | 2020-09-14T19:05:39Z | 2020-09-16T06:53:18Z | 2020-09-16T06:53:18Z | null | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
Note: rebased on huggingface/datasets | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/626/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/626",
"merged_at": "2020-09-16T06:53:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/626"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4397/comments | https://api.github.com/repos/huggingface/datasets/issues/4397/events | https://github.com/huggingface/datasets/pull/4397 | 1,246,597,632 | PR_kwDODunzps44XcG3 | 4,397 | Fix dependency on dill version | [] | closed | false | null | 1 | 2022-05-24T13:54:23Z | 2022-10-26T08:45:37Z | 2022-05-25T13:54:08Z | null | We had to make a hotfix by pinning dill:
- #4380
because from version 0.3.5, our custom `save_function` pickling function was raising an exception:
- #4379
This PR fixes this by implementing our custom `save_function` depending on the version of dill.
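As an illustration, a minimal sketch of the version-dependent registration (assumed code, not the exact implementation in this PR):
```python
# Hypothetical sketch: pick a save_function implementation based on the
# installed dill version and register it in a custom Pickler dispatch table.
from types import FunctionType

import dill
from packaging import version


class CustomPickler(dill.Pickler):
    dispatch = dill.Pickler.dispatch.copy()


def _save_function_legacy(pickler, obj):
    ...  # behaviour that worked with dill < 0.3.5


def _save_function_new(pickler, obj):
    ...  # adapted to the reworked function pickling in dill >= 0.3.5


if version.parse(dill.__version__) < version.parse("0.3.5"):
    CustomPickler.dispatch[FunctionType] = _save_function_legacy
else:
    CustomPickler.dispatch[FunctionType] = _save_function_new
```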
CC: @anivegesana
The following PR needs to be merged first:
- [x] #4384
- so that a circular import is fixed
It is also convenient to merge first:
- [x] #4385 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4397/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4397",
"merged_at": "2022-05-25T13:54:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4397"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4552/comments | https://api.github.com/repos/huggingface/datasets/issues/4552/events | https://github.com/huggingface/datasets/pull/4552 | 1,282,615,646 | PR_kwDODunzps46QSHV | 4,552 | Tell users to upload on the hub directly | [] | closed | false | null | 2 | 2022-06-23T15:47:52Z | 2022-06-26T15:49:46Z | 2022-06-26T15:39:11Z | null | As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs.
Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews.
Finally I removed the _previous good reasons_ to add a dataset on GitHub to only keep this one:
> In some rare cases it makes more sense to open a PR on GitHub. For example when you are not the author of the dataset and there is no clear organization / namespace that you can put the dataset under.
Does it sound good to you @albertvillanova @julien-c ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4552/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4552",
"merged_at": "2022-06-26T15:39:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4552"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! I updated the two remaining files"
] |
https://api.github.com/repos/huggingface/datasets/issues/2700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2700/comments | https://api.github.com/repos/huggingface/datasets/issues/2700/events | https://github.com/huggingface/datasets/issues/2700 | 950,276,325 | MDU6SXNzdWU5NTAyNzYzMjU= | 2,700 | from datasets import Dataset is failing | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-07-22T03:51:23Z | 2021-07-22T07:23:45Z | 2021-07-22T07:09:07Z | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Dataset
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>()
25 import posixpath
26 import requests
---> 27 from tqdm.contrib.concurrent import thread_map
28
29 from .. import __version__, config, utils
ModuleNotFoundError: No module named 'tqdm.contrib.concurrent'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: latest version as of 07/21/2021
- Platform: Google Colab
- Python version: 3.7
- PyArrow version:
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2700/timeline | null | completed | null | null | false | [
"Hi @kswamy15, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`"
] |
https://api.github.com/repos/huggingface/datasets/issues/4015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4015/comments | https://api.github.com/repos/huggingface/datasets/issues/4015/events | https://github.com/huggingface/datasets/issues/4015 | 1,180,510,856 | I_kwDODunzps5GXSqI | 4,015 | Can not correctly parse the classes with imagefolder | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-03-25T08:51:17Z | 2022-03-28T01:02:03Z | 2022-03-25T09:27:56Z | null | ## Describe the bug
I try to load my own image dataset with imagefolder, but the parsing of classes is incorrect.
## Steps to reproduce the bug
I organized my dataset (ImageNet) in the following structure:
```
- imagenet/
- train/
- n01440764/
- ILSVRC2012_val_00000293.jpg
- ......
- n01695060/
- ......
- val/
- n01440764/
- n01695060/
- ......
```
At first, I followed the instructions from the Huggingface [example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/image-classification#using-your-own-data) to load my data as:
```
from datasets import load_dataset
data_files = {'train': 'imagenet/train', 'val': 'imagenet/val'}
ds = load_dataset("nateraw/image-folder", data_files=data_files, task="image-classification")
```
but it resulted in the following error (I mask my personal path as <PERSONAL_PATH>):
```
FileNotFoundError: Unable to find 'https://huggingface.co/datasets/nateraw/image-folder/resolve/main/imagenet/train' at <PERSONAL_PATH>/ImageNet/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main
```
Next, I followed a recent issue #3960 to load data as:
```
from datasets import load_dataset
data_files = {'train': ['imagenet/train/**'], 'val': ['imagenet/val/**']}
ds = load_dataset("imagefolder", data_files=data_files, task="image-classification")
```
and the data can be loaded without error as follows (I copied the val folder to the train folder for illustration):
```
>>> ds
DatasetDict({
train: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
val: Dataset({
features: ['image', 'labels'],
num_rows: 50000
})
})
```
However, the parsed classes are wrong (there should be 1000 classes):
```
>>> ds["train"].features
{'image': Image(decode=True, id=None), 'labels': ClassLabel(num_classes=1, names=['val'], id=None)}
```
## Expected results
I expect that the "labels" in ds["train"].features should contain 1000 classes.
## Actual results
The "labels" in ds["train"].features contains only 1 wrong class.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.0
- Platform: Ubuntu 18.04
- Python version: Python 3.7.12
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4015/timeline | null | completed | null | null | false | [
"I found that the problem arises because the image files in my folder are actually symbolic links (for my own reasons). After modifications, the classes can now be correctly parsed. Therefore, I close this issue.",
"HI, I have a question. How much time did you load the ImageNet data files? "
] |
https://api.github.com/repos/huggingface/datasets/issues/3475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3475/comments | https://api.github.com/repos/huggingface/datasets/issues/3475/events | https://github.com/huggingface/datasets/issues/3475 | 1,087,352,041 | I_kwDODunzps5Az6zp | 3,475 | The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2021-12-23T03:56:43Z | 2021-12-24T00:23:03Z | null | null | ## Describe the bug
See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user.
## Steps to reproduce the bug
Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find like that.
## Expected results
English movie reviews only.
## Actual results
Example of a Spanish movie review (4173):
> "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3475/timeline | null | null | null | null | false | [
"Hi @puzzler10, thanks for reporting.\r\n\r\nPlease note this dataset is not hosted on Hugging Face Hub. See: \r\nhttps://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42\r\n\r\nIf there are issues with the source data of a dataset, you should contact the data owners/creators instead. In the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:\r\n> If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.\r\n\r\nP.S.: Please also note that the example you gave of non-English review is in Portuguese (not Spanish). ;)",
"Maybe best to just put a quick sentence in the dataset description that highlights this? "
] |
https://api.github.com/repos/huggingface/datasets/issues/2462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2462/comments | https://api.github.com/repos/huggingface/datasets/issues/2462/events | https://github.com/huggingface/datasets/issues/2462 | 915,384,613 | MDU6SXNzdWU5MTUzODQ2MTM= | 2,462 | Merge DatasetDict and Dataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | 0 | 2021-06-08T19:22:04Z | 2021-09-02T05:33:32Z | null | null | As discussed in #2424 and #2437 (please see there for detailed conversation):
- It would be desirable to improve UX with respect to the confusion between DatasetDict and Dataset.
- The difference between Dataset and DatasetDict is an additional abstraction complexity that confuses "typical" end users.
- A user expects a "Dataset" (whether it contains multiple splits or a single one) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.
Here is a proposal for discussion and refinement (to be potentially abandoned if it's not good enough):
- let's consider that a DatasetDict is also a Dataset with the various splits concatenated one after the other
- let's disallow the use of integers in split names (probably not a very big breaking change)
- when you index with integers you access the examples progressively, one split after the other (in a deterministic order)
- when you index with strings/split name you have the same behavior as now (full backward compat)
- let's then also have all the methods of a Dataset on the DatasetDict
The end goal would be to merge both the Dataset and DatasetDict objects into a single object that would be (pretty much totally) backward compatible with both.
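To make the discussion concrete, here is a purely hypothetical sketch of the indexing behavior described above (made-up names, not actual datasets code):
```python
class MergedDataset:
    """Hypothetical object behaving as both a DatasetDict and a Dataset."""

    def __init__(self, splits):
        # splits: e.g. {"train": <train Dataset>, "test": <test Dataset>}, in a fixed order
        self.splits = dict(splits)

    def __getitem__(self, key):
        if isinstance(key, str):
            # split name -> same behavior as the current DatasetDict
            return self.splits[key]
        # integer -> walk through the splits concatenated one after the other
        for split in self.splits.values():
            if key < len(split):
                return split[key]
            key -= len(split)
        raise IndexError("index out of range")
```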
There are a few things that we could discuss if we want to merge Dataset and DatasetDict:
1. what happens if you index by a string? Does it return the column or the split? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature
```
from datasets import load_dataset
dataset = load_dataset(...)
dataset["train"]
dataset["input_ids"]
```
2. what happens when you iterate over the object? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.
Moreover regarding your points:
- integers are not allowed as split names already
- it's definitely doable to have all the methods. Maybe some of them like train_test_split that is currently only available for Dataset can be tweaked to work for a split dataset
cc: @thomwolf @lhoestq | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2462/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4960/comments | https://api.github.com/repos/huggingface/datasets/issues/4960/events | https://github.com/huggingface/datasets/issues/4960 | 1,368,035,159 | I_kwDODunzps5Rio9X | 4,960 | BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | open | false | null | 2 | 2022-09-09T16:06:43Z | 2022-09-13T08:51:03Z | null | null | ## Describe the bug
I am trying to load a dataset from drive and running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
## Actual results
`AttributeError: 'BuilderConfig' object has no attribute 'schema'`
<details>
```
Using custom data configuration default-a1ca3e05be5abf2f
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [8], in <cell line: 2>()
1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1720 ignore_verifications = ignore_verifications or save_infos
1722 # Create a dataset builder
-> 1723 builder_instance = load_dataset_builder(
1724 path=path,
1725 name=name,
1726 data_dir=data_dir,
1727 data_files=data_files,
1728 cache_dir=cache_dir,
1729 features=features,
1730 download_config=download_config,
1731 download_mode=download_mode,
1732 revision=revision,
1733 use_auth_token=use_auth_token,
1734 **config_kwargs,
1735 )
1737 # Return iterable dataset in case of streaming
1738 if streaming:
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1523 raise ValueError(error_msg)
1525 # Instantiate the dataset builder
-> 1526 builder_instance: DatasetBuilder = builder_cls(
1527 cache_dir=cache_dir,
1528 config_name=config_name,
1529 data_dir=data_dir,
1530 data_files=data_files,
1531 hash=hash,
1532 features=features,
1533 use_auth_token=use_auth_token,
1534 **builder_kwargs,
1535 **config_kwargs,
1536 )
1538 return builder_instance
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1153 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1154 super().__init__(*args, **kwargs)
1155 # Batch size used by the ArrowWriter
1156 # It defines the number of samples that are kept in memory before writing them
1157 # and also the length of the arrow chunks
1158 # None means that the ArrowWriter will use its default value
1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
305 if info is None:
306 info = self.get_exported_dataset_info()
--> 307 info.update(self._info())
308 info.builder_name = self.name
309 info.config_name = self.config.name
File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self)
474 def _info(self):
475
476 # BioASQ Task B source schema
--> 477 if self.config.schema == "source":
478 features = datasets.Features(
479 {
480 "id": datasets.Value("string"),
(...)
504 }
505 )
506 # simplified schema for QA tasks
AttributeError: 'BuilderConfig' object has no attribute 'schema'
```
</details>
## Environment info
- `datasets` version: 2.4.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.4
- PyArrow version: 9.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4960/timeline | null | null | null | null | false | [
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioasq_9b_source`); how can this be generalized to other datasets?\r\n- providing an actionable error message that lists available `name` values? I only got available `name` values once I've provided something there (`name=\"aps/bioasq_task_b\"`), before it would not even mention that it requires `name` argument",
"Hi ! In general the list of available configurations is prompted. I think this is an issue with this specific dataset.\r\n\r\nFeel free to open a new discussions at https://huggingface.co/datasets/aps/bioasq_task_b/discussions\r\n\r\ncc @apsdehal\r\n\r\nIn particular it sounds like the `BUILDER_CONFIG_CLASS= BigBioConfig ` class attribute is missing and the _info should account for schema being None and raise an error"
] |
https://api.github.com/repos/huggingface/datasets/issues/1520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1520/comments | https://api.github.com/repos/huggingface/datasets/issues/1520/events | https://github.com/huggingface/datasets/pull/1520 | 764,140,938 | MDExOlB1bGxSZXF1ZXN0NTM4MzU5MTA5 | 1,520 | ru_reviews dataset adding | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 3 | 2020-12-12T18:13:06Z | 2022-10-03T09:38:42Z | 2022-10-03T09:38:42Z | null | RuReviews: An Automatically Annotated Sentiment Analysis Dataset for Product Reviews in Russian | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1520/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1520/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1520.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1520",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1520.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1520"
} | true | [
"Hi @lhoestq \r\n\r\nI have added the readme as well \r\n\r\nPlease do have a look at it when suitable ",
"Chatted with @darshan-gandhi on Slack about parsing examples into a separate text and sentiment field",
"Thanks for your contribution, @darshan-gandhi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/4338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4338/comments | https://api.github.com/repos/huggingface/datasets/issues/4338/events | https://github.com/huggingface/datasets/pull/4338 | 1,234,478,851 | PR_kwDODunzps43vwsm | 4,338 | Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full | [] | closed | false | null | 2 | 2022-05-12T21:02:08Z | 2022-05-16T15:51:02Z | 2022-05-16T15:42:59Z | null | Adding evaluation metadata for:
- Tweet Eval
- Tweets Hate Speech Detection
- VCTK
- Weibo NER
- Wisesight Sentiment
- XSum
- Yahoo Answers Topics
- Yelp Polarity
- Yelp Review Full | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4338/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4338",
"merged_at": "2022-05-16T15:42:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4338"
} | true | [
"Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5032/comments | https://api.github.com/repos/huggingface/datasets/issues/5032/events | https://github.com/huggingface/datasets/issues/5032 | 1,388,270,935 | I_kwDODunzps5Sv1VX | 5,032 | new dataset type: single-label and multi-label video classification | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 6 | 2022-09-27T19:40:11Z | 2022-11-02T19:10:13Z | null | null | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I have video files with single or multiple labels. I want to train a single/multi-label video classification model. I want datasets to support generating multi-modal batches (audio+frame sequence) from video files. The audio waveform and frame sequence can be extracted from each video clip; then I can use any audio, image or video model from the transformers library to extract features, which will be fed into my model.
**Describe alternatives you've considered**
Currently, I am using https://github.com/facebookresearch/pytorchvideo dataloaders. There don't seem to be many alternatives.
**Additional context**
I am willing to open a PR but don't know where to start.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5032/timeline | null | null | null | null | false | [
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type needs\r\n\r\nalso cc @nateraw who also took a look at what we can do for video",
"@lhoestq @nateraw is there any progress on adding video classification datasets? ",
"Hi ! I think we just missing which lib we're going to use to decode the videos + which parameters must go in the `Video` type",
"Hmm. `decord` could be nice but it's no longer maintained [it seems](https://github.com/dmlc/decord/issues/214). ",
"pytorchvideo uses [pyav](https://github.com/PyAV-Org/PyAV) as the default decoder: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L37\r\n\r\nAlso it would be great if `optionally` audio can also be decoded from the video as in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/labeled_video_dataset.py#L35\r\n\r\nHere are the other decoders supported in pytorchvideo: https://github.com/facebookresearch/pytorchvideo/blob/c8d23d8b7e597586a9e2d18f6ed31ad8aa379a7a/pytorchvideo/data/encoded_video.py#L17\r\n",
"@sayakpaul I did do quite a bit of work on [this PR](https://github.com/huggingface/datasets/pull/4532) a while back to add a video feature. It's outdated, but uses my `encoded_video` [package](https://github.com/nateraw/encoded-video) under the hood, which is basically a wrapper around PyAV stolen from [pytorchvideo](https://github.com/facebookresearch/pytorchvideo/) that gets rid of the `torch` dependency. \r\n\r\nwould be really great to get something like this in...it's just a really tricky and time consuming feature to add. "
] |
https://api.github.com/repos/huggingface/datasets/issues/389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/389/comments | https://api.github.com/repos/huggingface/datasets/issues/389/events | https://github.com/huggingface/datasets/pull/389 | 656,921,768 | MDExOlB1bGxSZXF1ZXN0NDQ5MTMyOTU5 | 389 | Fix pickling of SplitDict | [] | closed | false | null | 11 | 2020-07-14T21:53:39Z | 2020-08-04T14:38:10Z | 2020-08-04T14:38:10Z | null | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, 'sentencized_wiki_dataset.pt')
```
However, upon unpickling the dataset via torch.load(...), this error is raised:
```
ValueError("Cannot add elem. Use .add() instead.")
```
On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats dicts specially. Pickle expects access to `dict.__setitem__`, but this is disallowed by the class.
The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`.
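For illustration, a minimal sketch of such an explicit pickling interface on a toy dict subclass (hypothetical code; `GuardedDict` is just a stand-in for `SplitDict`):
```python
import pickle


class GuardedDict(dict):
    """Toy stand-in for SplitDict: direct item assignment is disallowed."""

    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def __reduce__(self):
        # Rebuild an empty instance and restore the items via __setstate__,
        # so unpickling never goes through the guarded __setitem__.
        return (self.__class__, (), list(self.items()))

    def __setstate__(self, items):
        for key, value in items:
            dict.__setitem__(self, key, value)


d = GuardedDict()
dict.__setitem__(d, "train", "split info")
assert pickle.loads(pickle.dumps(d)) == d
```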
Testing:
- Manually pickled and unpickled a modified wikipedia dataset.
- Ran `make style`
I would be happy to run any other tests, but I couldn't find any in the contributing guidelines. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/389",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/389"
} | true | [
"By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling/unpickling the Dataset object the \"sanctioned\" way of doing this? Or is there a better way that I'm missing?",
"I've had success with saving datasets to disk via:\r\n\r\n```python\r\ncache_file = \"/my/dset.cache\"\r\ndset = dset.map(whatever, cache_file_name=cache_file)\r\n# then, later\r\ndset = nlp.Dataset.from_file(cache_file)\r\n```\r\n\r\nThis restores the dataset with all the attributes I need.",
"Thanks @jarednielsen, that makes sense. I'm a little wary of messing with the cache files, since I still don't really understand what's going on under the hood with Apache Arrow. \r\n\r\nRelated question: I'd like to do parallel pre-processing of the dataset. I know how to break the dataset up via sharding, but is there any way to combine the shards back together again once the processing is done? Right now I'm probably just going to iterate over each shard, write the contexts to a txt file, and then cat the txt files, but it feels like there ought to be a nicer way to concatenate datasets.",
"Haha, opened a PR for that functionality about an hour ago: https://github.com/huggingface/nlp/pull/390. Glad we're on the same page :)",
"Datasets are not supposed to be pickled as pickle tries to put all the dataset in memory if I'm not wrong (and write all the data on disk).\r\nThe concatenate method however is a very cool feature, looking forward to having it merged :)",
"Ah, yes, you are correct. The pickle file contains the whole dataset, not just the cache names, which is not quite what I expected.\r\n\r\nI tried adding a warning when pickling a Dataset, to prevent others like me from trying it. Interestingly, however, the warning is raised whenever any function on the dataset is called (select, shard, etc.). \r\n\r\n```\r\nimport nlp\r\nwiki = nlp.load_dataset('wikipedia', split='train')\r\nwiki = wiki.shard(16, 0) # Triggers pickling of dataset\r\n```\r\n\r\nI believe this is because [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which gets the function signature, is actually pickling the whole dataset (and thereby serializing all the data to text). I checked by printing that string, and sure enough it was full of Wikipedia articles.\r\n\r\nI don't think the whole pickling thing is worth the effort, so I'll close the PR. But I did want to mention this serialization behavior in case it's not intended.",
"Thanks for reporting. Indeed this line shouldn't serialize the data but only the function itself.\r\n",
"Keeping this open because I would like to keep brainstorming a bit on this.\r\n\r\nOne note on this is that we should have a clean serialization workflow, probably one that could serialize to a few formats (arrow, parquet and tfrecords come to mind).",
"This PR could be useful. My specific use case is `multiprocessing.Pool` for parallel preprocessing (because of the Python tokenization bottleneck at https://github.com/huggingface/transformers/issues/5729). I shard a large dataset, run map on each shard within a multiprocessing pool, and then concatenate them back together. This is only possible if a dataset can be pickled, otherwise the logic is much more complex. There's no reason to make it un-picklable, even if it's not the recommended usage.\r\n\r\n```python\r\nimport nlp\r\nimport multiprocessing\r\n\r\ndef func(ex):\r\n return {\"text\": \"Prefix: \" + ex[\"text\"]}\r\n\r\ndef map_helper(dset):\r\n return dset.map(func)\r\n\r\nn_shards = 16\r\ndset = nlp.load_dataset(\"wikitext-2-raw-v1\", split=\"train\")\r\nwith multiprocessing.Pool(processes=n_shards) as pool:\r\n shards = pool.map(map_helper, [dset.shard(n_shards, i, contiguous=True) for i in range(n_shards)])\r\ndset = nlp.concatenate_datasets(shards)\r\n```\r\n",
"Yes I agree.\r\n#423 just got merged and should allow serialization of `SplitDict`. Could you try it and see if it'ok on your side now ?",
"Closing this, assuming it was fixed in #423."
] |
https://api.github.com/repos/huggingface/datasets/issues/3422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3422/comments | https://api.github.com/repos/huggingface/datasets/issues/3422/events | https://github.com/huggingface/datasets/issues/3422 | 1,078,022,619 | I_kwDODunzps5AQVHb | 3,422 | Error about load_metric | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-12-13T02:49:51Z | 2022-01-07T14:06:47Z | 2022-01-07T14:06:47Z | null | ## Describe the bug
File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric
metric = metric_cls(
TypeError: 'NoneType' object is not callable
## Steps to reproduce the bug
```python
metric = load_metric("glue", "sst2")
```
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3422/timeline | null | completed | null | null | false | [
"Hi ! I wasn't able to reproduce your error.\r\n\r\nCan you try to clear your cache at `~/.cache/huggingface/modules` and try again ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/1344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1344/comments | https://api.github.com/repos/huggingface/datasets/issues/1344/events | https://github.com/huggingface/datasets/pull/1344 | 759,831,925 | MDExOlB1bGxSZXF1ZXN0NTM0NzY2ODIy | 1,344 | Add hausa ner corpus | [] | closed | false | null | 0 | 2020-12-08T22:25:04Z | 2020-12-08T23:11:55Z | 2020-12-08T23:11:55Z | null | Added Hausa VOA NER data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1344/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1344",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1344"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4585/comments | https://api.github.com/repos/huggingface/datasets/issues/4585/events | https://github.com/huggingface/datasets/pull/4585 | 1,287,064,929 | PR_kwDODunzps46e1Ne | 4,585 | Host multi_news data on the Hub instead of Google Drive | [] | closed | false | null | 1 | 2022-06-28T09:32:06Z | 2022-06-28T14:19:35Z | 2022-06-28T14:08:48Z | null | Host data files of multi_news dataset on the Hub.
They were on Google Drive.
Fix #4580. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4585/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4585.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4585",
"merged_at": "2022-06-28T14:08:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4585.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4585"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3127/comments | https://api.github.com/repos/huggingface/datasets/issues/3127/events | https://github.com/huggingface/datasets/issues/3127 | 1,032,100,613 | I_kwDODunzps49hJsF | 3,127 | datasets-cli: convertion of a tfds dataset to a huggingface one. | [] | open | false | null | 1 | 2021-10-21T06:14:27Z | 2021-10-27T11:36:05Z | null | null | ### Discussed in https://github.com/huggingface/datasets/discussions/3079
<div type='discussions-op-text'>
<sup>Originally posted by **vitalyshalumov** October 14, 2021</sup>
I'm trying to convert a tfds dataset to a huggingface one.
I've tried:
1. datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/mnist/3.0.1/
2. datasets-cli convert --tfds_path ~/tensorflow_datasets/mnist/3.0.1/ --datasets_directory ~/.cache/huggingface/datasets/
and other permutations.
The script appears to run and finish without an error, but when looking in the huggingface/datasets/ folder, nothing is created.
</div> | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3127/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3127/timeline | null | null | null | null | false | [
"Hi,\r\n\r\nthe MNIST dataset is already available on the Hub. You can use it as follows:\r\n```python\r\nimport datasets\r\ndataset_dict = datasets.load_dataset(\"mnist\")\r\n```\r\n\r\nAs for the conversion of TFDS datasets to HF datasets, we will be working on it in the coming months, so stay tuned."
] |
https://api.github.com/repos/huggingface/datasets/issues/540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/540/comments | https://api.github.com/repos/huggingface/datasets/issues/540/events | https://github.com/huggingface/datasets/pull/540 | 688,475,884 | MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz | 540 | [BUGFIX] Fix Race Dataset Checksum bug | [] | closed | false | null | 4 | 2020-08-29T07:00:10Z | 2020-09-18T11:42:20Z | 2020-09-18T11:42:20Z | null | In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/540/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/540",
"merged_at": "2020-09-18T11:42:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/540"
} | true | [
"I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?",
"This has fixed #537 at least on my machine hahaha.\r\n\r\nNice point! I think it would totally worth it :) What the best implementation approach would you suggest?\r\n\r\nWould it be possible to have `high school`, `middle` and `all` inside each portion of `train`, `validation` and `test`? Would this make sense?",
"I think we could have one dataset configuration for `high school`, one for `middle` and one for `all`.\r\nYou just need to add\r\n```python\r\n BUILDER_CONFIGS = [\r\n nlp.BuilderConfig(\r\n name=\"high school\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"middle\",\r\n description=\"insert description here\",\r\n ),\r\n nlp.BuilderConfig(\r\n name=\"all\",\r\n description=\"insert description here\",\r\n ),\r\n ]\r\n```\r\nas a class attribute for the `Race` class.\r\n\r\nThen in `generate_examples` you can check the value of `self.config.name` and choose which files to include when generating examples.\r\n\r\nYou can check [mlsum](https://github.com/huggingface/nlp/blob/master/datasets/mlsum/mlsum.py) for example if you want to see how it done in general, it's a dataset that has five configurations, and each config has train/val/test splits.",
"Hi @lhoestq sorry for the delay in addressing your comments. Thanks for your assistance :)\r\n\r\nYou were correct as well, as I was using the script without the `datasets/race/dataset_infos.json` file, it did not verify the checksum. I already fix it as well :)\r\n\r\nI managed to get everything running smoothly by now. Please let me know if you think that I could improve my solution"
] |
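For context, the multi-configuration approach suggested in the review comments of this PR can be sketched roughly as follows. This is an illustrative sketch only: the config names, descriptions and the helper `portions_for` are assumptions, not the code that was merged (the actual script used the then-current `nlp` builder API mentioned in the comments).

```python
import datasets

# Three configurations, as suggested in the review: "high", "middle" and "all".
RACE_CONFIGS = [
    datasets.BuilderConfig(name="high", description="High school portion of RACE"),
    datasets.BuilderConfig(name="middle", description="Middle school portion of RACE"),
    datasets.BuilderConfig(name="all", description="Both portions of RACE"),
]

def portions_for(config_name: str) -> list:
    """Hypothetical helper: map a config name to the archive folders to read."""
    if config_name == "all":
        return ["high", "middle"]
    if config_name in ("high", "middle"):
        return [config_name]
    raise ValueError(f"Unknown RACE config: {config_name}")

print(portions_for("all"))  # ['high', 'middle']
```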
https://api.github.com/repos/huggingface/datasets/issues/5162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5162/comments | https://api.github.com/repos/huggingface/datasets/issues/5162/events | https://github.com/huggingface/datasets/issues/5162 | 1,422,461,112 | I_kwDODunzps5UyQi4 | 5,162 | Pip-compile: Could not find a version that matches dill<0.3.6,>=0.3.6 | [] | closed | false | null | 7 | 2022-10-25T13:23:50Z | 2022-11-14T08:25:37Z | 2022-10-28T05:38:15Z | null | ### Describe the bug
When using `pip-compile` (part of `pip-tools`) to generate a pinned requirements file that includes `datasets`, a version conflict of `dill` appears.
It is caused by a transitive dependency conflict between `datasets` and `multiprocess`.
### Steps to reproduce the bug
```bash
$ echo "datasets" > requirements.in
$ pip install pip-tools
$ pip-compile requirements.in
Could not find a version that matches dill<0.3.6,>=0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
Tried: 0.2, 0.2, 0.2.1, 0.2.1, 0.2.2, 0.2.2, 0.2.3, 0.2.3, 0.2.4, 0.2.4, 0.2.5, 0.2.5, 0.2.6, 0.2.7, 0.2.7.1, 0.2.8, 0.2.8.1, 0.2.8.2, 0.2.9, 0.3.0, 0.3.1, 0.3.1.1, 0.3.2, 0.3.3, 0.3.3, 0.3.4, 0.3.4, 0.3.5, 0.3.5, 0.3.5.1, 0.3.5.1, 0.3.6, 0.3.6
Skipped pre-versions: 0.1a1, 0.2a1, 0.2a1, 0.2b1, 0.2b1
There are incompatible versions in the resolved dependencies:
dill<0.3.6 (from datasets==2.6.1->-r requirements.in (line 1))
dill>=0.3.6 (from multiprocess==0.70.14->datasets==2.6.1->-r requirements.in (line 1))
```
### Expected behavior
A correctly generated file `requirements.txt` with pinned dependencies
### Environment info
Tested with versions `2.6.1, 2.6.0, 2.5.2` on Python 3.8 and 3.10 on Ubuntu 20.04LTS and Python 3.10 on MacOS 12.6 (M1). | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5162/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5162/timeline | null | completed | null | null | false | [
"Thanks for reporting, @Rijgersberg.\r\n\r\nWe were waiting for the release of `dill` 0.3.6, that happened 2 days ago (24 Oct 2022): https://github.com/uqfoundation/dill/releases/tag/dill-0.3.6\r\n- See comment: https://github.com/huggingface/datasets/pull/4397#discussion_r880629543\r\n\r\nAlso `multiprocess` 0.70.14 was released 2 days ago: https://github.com/uqfoundation/multiprocess/releases/tag/multiprocess-0.70.14\r\n\r\nWe are addressing this issue to align dependencies.",
"In your specific setup, I guess the compatible configuration is with `multiprocess` 0.70.13 (instead of 0.70.14).",
"@Rijgersberg this issue is fixed. It will be available in our next `datasets` release.",
"Thanks!",
"> @Rijgersberg this issue is fixed. It will be available in our next `datasets` release.\n\nAny chance you have a eta? ",
"@StefanSamba we are disussing about making a release early this week.",
"@Rijgersberg, please also that you can make `pip-compile` work by using the backtracking resolver (instead of the legacy one): https://pip-tools.readthedocs.io/en/latest/#a-note-on-resolvers\r\n```\r\npip-compile --resolver=backtracking requirements.in\r\n```\r\nThis resolver will automatically use `multiprocess` 0.70.13 version. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3226/comments | https://api.github.com/repos/huggingface/datasets/issues/3226/events | https://github.com/huggingface/datasets/pull/3226 | 1,046,584,518 | PR_kwDODunzps4uL0ma | 3,226 | Fix paper BibTeX citation with proceedings reference | [] | closed | false | null | 0 | 2021-11-06T19:52:59Z | 2021-11-07T07:05:28Z | 2021-11-07T07:05:27Z | null | Fix paper BibTeX citation with proceedings reference. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3226/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3226",
"merged_at": "2021-11-07T07:05:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3226"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4893/comments | https://api.github.com/repos/huggingface/datasets/issues/4893/events | https://github.com/huggingface/datasets/issues/4893 | 1,350,655,674 | I_kwDODunzps5QgV66 | 4,893 | Oversampling strategy for iterable datasets in `interleave_datasets` | [
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | closed | false | null | 9 | 2022-08-25T10:06:55Z | 2022-10-03T12:37:46Z | 2022-10-03T12:37:46Z | null | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable datasets as well, to support this oversampling strategy:
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {}))
>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {}))
>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable`, which are used in `_interleave_iterable_datasets` in `iterable_dataset.py`.
I would be happy to share some guidance if anyone would like to give it a shot :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4893/timeline | null | completed | null | null | false | [
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n",
"Great @ylacombe thanks ! I'm assigning you this issue",
"Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)",
"Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ",
"Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`",
"Hi @ylacombe let us know if we can help with anything :)",
"Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. Could you help me on that or give me some indications? \r\n",
"Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a python file that start with \"test_*\" and make sure they return not errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it can be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this github issue to make sure it works as expected.",
"Resolved via #5036."
] |
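For reference, a minimal sketch of the oversampling behaviour discussed in this issue, shown here with map-style datasets (where `stopping_strategy="all_exhausted"` was already available per #4831); per the closing comment, #5036 extended the same argument to `IterableDataset`. The toy column values mirror the example in the issue body; the exact printed order is not asserted here.

```python
from datasets import Dataset, interleave_datasets

# Three toy datasets of different lengths, mirroring the issue's example.
d1 = Dataset.from_dict({"a": [0, 1, 2]})
d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})

# "first_exhausted" (the default) stops at the shortest dataset;
# "all_exhausted" keeps cycling the shorter ones until the longest is done.
mixed = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
print([x["a"] for x in mixed])
```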
https://api.github.com/repos/huggingface/datasets/issues/1594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1594/comments | https://api.github.com/repos/huggingface/datasets/issues/1594/events | https://github.com/huggingface/datasets/issues/1594 | 769,747,767 | MDU6SXNzdWU3Njk3NDc3Njc= | 1,594 | connection error | [] | closed | false | null | 4 | 2020-12-17T09:18:34Z | 2022-06-01T15:33:42Z | 2022-06-01T15:33:41Z | null | Hi
I am running into this error, thanks
```
> Traceback (most recent call last):
File "finetune_t5_trainer.py", line 379, in <module>
main()
File "finetune_t5_trainer.py", line 208, in main
if training_args.do_eval or training_args.evaluation_strategy != EvaluationStrategy.NO
File "finetune_t5_trainer.py", line 207, in <dictcomp>
for task in data_args.eval_tasks}
File "/workdir/seq2seq/data/tasks.py", line 70, in get_dataset
dataset = self.load_dataset(split=split)
File "/workdir/seq2seq/data/tasks.py", line 66, in load_dataset
return datasets.load_dataset(self.task.name, split=split, script_version="master")
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/boolq/boolq.py
el/0 I1217 01:11:33.898849 354161 main shadow.py:210 Current job status: FINISHED
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1594/timeline | null | completed | null | null | false | [
"This happen quite often when they are too many concurrent requests to github.\r\n\r\ni can understand it’s a bit cumbersome to handle on the user side. Maybe we should try a few times in the lib (eg with timeout) before failing, what do you think @lhoestq ?",
"Yes currently there's no retry afaik. We should add retries",
"Retries were added in #1603 :) \r\nIt will be available in the next release",
"Hi @lhoestq thank you for the modification, I will use`script_version=\"master\"` for now :), to my experience, also setting timeout to a larger number like 3*60 which I normally use helps a lot on this.\r\n"
] |
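As a user-side stopgap for the transient GitHub failures discussed in this issue (library-side retries landed in #1603), a simple retry wrapper can be used. The function name, retry count and wait time below are illustrative assumptions, not part of the library.

```python
import time
import datasets

def load_dataset_with_retries(path, retries=3, wait_seconds=60, **kwargs):
    """Retry datasets.load_dataset a few times on a transient ConnectionError."""
    for attempt in range(retries):
        try:
            return datasets.load_dataset(path, **kwargs)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            time.sleep(wait_seconds)

# Example (the keyword mirrors the version used in this thread):
# load_dataset_with_retries("boolq", split="train", script_version="master")
```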
https://api.github.com/repos/huggingface/datasets/issues/2793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2793/comments | https://api.github.com/repos/huggingface/datasets/issues/2793/events | https://github.com/huggingface/datasets/pull/2793 | 968,967,773 | MDExOlB1bGxSZXF1ZXN0NzExMDQ4NDY2 | 2,793 | Fix type hint for data_files | [] | closed | false | null | 0 | 2021-08-12T14:42:37Z | 2021-08-12T15:35:29Z | 2021-08-12T15:35:29Z | null | Fix type hint for `data_files` in signatures and docstrings. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2793/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2793.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2793",
"merged_at": "2021-08-12T15:35:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2793.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2793"
} | true | [] |