url (string, lengths 58-61) | repository_url (string, 1 class) | labels_url (string, lengths 72-75) | comments_url (string, lengths 67-70) | events_url (string, lengths 65-68) | html_url (string, lengths 46-51) | id (int64, 599M-1.83B) | node_id (string, lengths 18-32) | number (int64, 1-6.09k) | title (string, lengths 1-290) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, length 20) | updated_at (string, length 20) | closed_at (string, length 20, nullable) | active_lock_reason (null) | body (string, lengths 0-228k, nullable) | reactions (dict) | timeline_url (string, lengths 67-70) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/906/comments | https://api.github.com/repos/huggingface/datasets/issues/906/events | https://github.com/huggingface/datasets/pull/906 | 752,403,395 | MDExOlB1bGxSZXF1ZXN0NTI4NzM0MDY0 | 906 | Fix url with backslash in windows for blimp and pg19 | [] | closed | false | null | 0 | 2020-11-27T17:59:11Z | 2020-11-27T18:19:56Z | 2020-11-27T18:19:56Z | null | Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls
cc @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/906/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/906/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/906.diff",
"html_url": "https://github.com/huggingface/datasets/pull/906",
"merged_at": "2020-11-27T18:19:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/906.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/906"
} | true | [] |
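
The PR in the row above only notes that the blimp and pg19 scripts were building URLs with `os.path.join`; the snippet below is a minimal sketch of why that breaks on Windows and one common way around it. The `base_url` value and file names are made up for illustration, not taken from those dataset scripts.

```python
import os
import posixpath

base_url = "https://example.com/data"  # hypothetical; not the real blimp/pg19 host

# os.path.join uses the platform separator, so on Windows the result contains
# backslashes and is no longer a valid URL:
#   "https://example.com/data\\v1\\file.zip"
windows_style = os.path.join(base_url, "v1", "file.zip")

# posixpath.join (or a plain "/".join) always uses forward slashes, so it is
# safe for building URLs on every platform:
portable = posixpath.join(base_url, "v1", "file.zip")

print(windows_style)
print(portable)
```
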
https://api.github.com/repos/huggingface/datasets/issues/2491 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2491/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2491/comments | https://api.github.com/repos/huggingface/datasets/issues/2491/events | https://github.com/huggingface/datasets/pull/2491 | 919,714,506 | MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw | 2,491 | add eduge classification dataset | [] | closed | false | null | 1 | 2021-06-13T04:37:01Z | 2021-06-13T05:06:48Z | 2021-06-13T05:06:38Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2491/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2491/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2491.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2491",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2491.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2491"
} | true | [
"Closing this PR as I'll submit a new one - bug free"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/2856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2856/comments | https://api.github.com/repos/huggingface/datasets/issues/2856/events | https://github.com/huggingface/datasets/pull/2856 | 983,876,734 | MDExOlB1bGxSZXF1ZXN0NzIzMzg2NzIw | 2,856 | fix: 🐛 remove URL's query string only if it's ?dl=1 | [] | closed | false | null | 0 | 2021-08-31T13:40:07Z | 2021-08-31T14:22:12Z | 2021-08-31T14:22:12Z | null | A lot of URLs use query strings, for example
http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, we
must not remove it when trying to detect the protocol. We thus remove it
only in the case of the query string being ?dl=1 which occurs on dropbox
and dl.orangedox.com. Also: add unit tests.
See https://github.com/huggingface/datasets/pull/2843 for the original
discussion. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2856/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2856.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2856",
"merged_at": "2021-08-31T14:22:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2856.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2856"
} | true | [] |
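
As a rough illustration of the rule described in the row above (PR #2856), the helper below drops a URL's query string only when it is exactly `?dl=1`; the function name and the use of `urllib.parse` are assumptions for this sketch, not the actual `datasets` implementation.

```python
from urllib.parse import urlparse, urlunparse


def strip_dl_query(url: str) -> str:
    """Remove the query string only when it is exactly 'dl=1' (Dropbox / dl.orangedox.com
    style links); meaningful query strings such as download.php?f=... are kept."""
    parsed = urlparse(url)
    if parsed.query == "dl=1":
        parsed = parsed._replace(query="")
    return urlunparse(parsed)


print(strip_dl_query("https://www.dropbox.com/s/abc123/file.zip?dl=1"))
# -> https://www.dropbox.com/s/abc123/file.zip
print(strip_dl_query("http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip"))
# -> unchanged, because here the query string identifies the file to download
```
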
https://api.github.com/repos/huggingface/datasets/issues/1160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1160/comments | https://api.github.com/repos/huggingface/datasets/issues/1160/events | https://github.com/huggingface/datasets/pull/1160 | 757,677,188 | MDExOlB1bGxSZXF1ZXN0NTMzMDE0Nzcw | 1,160 | adding TabFact dataset | [] | closed | false | null | 2 | 2020-12-05T13:05:52Z | 2020-12-09T11:41:39Z | 2020-12-09T09:12:41Z | null | Adding TabFact: A Large-scale Dataset for Table-based Fact Verification.
https://github.com/wenhuchen/Table-Fact-Checking
- The tables are stored as individual csv files, so we need to download 16,573 🤯 csv files. As a result the `datasets_infos.json` file is huge (6.62 MB).
- The original dataset has a nested structure where a table is one example and each table has multiple statements;
the structure is flattened here so that each statement is one example. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1160/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1160.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1160",
"merged_at": "2020-12-09T09:12:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1160.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1160"
} | true | [
"FYI you guys are on GitHub's homepage π\r\n\r\n<img width=\"1589\" alt=\"Screenshot 2020-12-09 at 12 34 28\" src=\"https://user-images.githubusercontent.com/326577/101624883-a0ecc700-39e8-11eb-8a97-11af0d036536.png\">\r\n",
"Yeayy π π₯"
] |
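
The TabFact row above mentions flattening the original nested structure (one table carrying several statements) so that each statement becomes one example; the generator below sketches that idea generically. The field names (`table_id`, `statements`, `labels`) are invented for the example and do not match the real loading script.

```python
def flatten_tables(tables):
    """Yield one example per (table, statement) pair instead of one example per table."""
    for table in tables:
        for statement, label in zip(table["statements"], table["labels"]):
            yield {"table_id": table["table_id"], "statement": statement, "label": label}


nested = [{"table_id": "t1", "statements": ["s1", "s2"], "labels": [1, 0]}]
print(list(flatten_tables(nested)))
# [{'table_id': 't1', 'statement': 's1', 'label': 1},
#  {'table_id': 't1', 'statement': 's2', 'label': 0}]
```
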
https://api.github.com/repos/huggingface/datasets/issues/237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/237/comments | https://api.github.com/repos/huggingface/datasets/issues/237/events | https://github.com/huggingface/datasets/issues/237 | 631,199,940 | MDU6SXNzdWU2MzExOTk5NDA= | 237 | Can't download MultiNLI | [] | closed | false | null | 3 | 2020-06-04T23:05:21Z | 2020-06-06T10:51:34Z | 2020-06-06T10:51:34Z | null | When I try to download MultiNLI with
```python
dataset = load_dataset('multi_nli')
```
I get this long error:
```python
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-13-3b11f6be4cb9> in <module>
1 # Load a dataset and print the first examples in the training set
2 # nli_dataset = nlp.load_dataset('multi_nli')
----> 3 dataset = load_dataset('multi_nli')
4 # nli_dataset = nlp.load_dataset('multi_nli', split='validation_matched[:10%]')
5 # print(nli_dataset['train'][0])
~\Miniconda3\envs\nlp\lib\site-packages\nlp\load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
514
515 # Download and prepare data
--> 516 builder_instance.download_and_prepare(
517 download_config=download_config,
518 download_mode=download_mode,
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
417 with utils.temporary_assignment(self, "_cache_dir", tmp_data_dir):
418 verify_infos = not save_infos and not ignore_verifications
--> 419 self._download_and_prepare(
420 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
421 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
455 split_dict = SplitDict(dataset_name=self.name)
456 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 457 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
458 # Checksums verification
459 if verify_infos:
~\Miniconda3\envs\nlp\lib\site-packages\nlp\datasets\multi_nli\60774175381b9f3f1e6ae1028229e3cdb270d50379f45b9f2c01008f50f09e6b\multi_nli.py in _split_generators(self, dl_manager)
99 def _split_generators(self, dl_manager):
100
--> 101 downloaded_dir = dl_manager.download_and_extract(
102 "http://storage.googleapis.com/tfds-data/downloads/multi_nli/multinli_1.0.zip"
103 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in download_and_extract(self, url_or_urls)
214 extracted_path(s): `str`, extracted paths of given URL(s).
215 """
--> 216 return self.extract(self.download(url_or_urls))
217
218 def get_recorded_sizes_checksums(self):
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in extract(self, path_or_paths)
194 path_or_paths.
195 """
--> 196 return map_nested(
197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\py_utils.py in map_nested(function, data_struct, dict_only, map_tuple)
168 return tuple(mapped)
169 # Singleton
--> 170 return function(data_struct)
171
172
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\download_manager.py in <lambda>(path)
195 """
196 return map_nested(
--> 197 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths,
198 )
199
~\Miniconda3\envs\nlp\lib\site-packages\nlp\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
231 if is_zipfile(output_path):
232 with ZipFile(output_path, "r") as zip_file:
--> 233 zip_file.extractall(output_path_extracted)
234 zip_file.close()
235 elif tarfile.is_tarfile(output_path):
~\Miniconda3\envs\nlp\lib\zipfile.py in extractall(self, path, members, pwd)
1644
1645 for zipinfo in members:
-> 1646 self._extract_member(zipinfo, path, pwd)
1647
1648 @classmethod
~\Miniconda3\envs\nlp\lib\zipfile.py in _extract_member(self, member, targetpath, pwd)
1698
1699 with self.open(member, pwd=pwd) as source, \
-> 1700 open(targetpath, "wb") as target:
1701 shutil.copyfileobj(source, target)
1702
OSError: [Errno 22] Invalid argument: 'C:\\Users\\Python\\.cache\\huggingface\\datasets\\3e12413b8ec69f22dfcfd54a79d1ba9e7aac2e18e334bbb6b81cca64fd16bffc\\multinli_1.0\\Icon\r'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/237/timeline | null | completed | null | null | false | [
"You should use `load_dataset('glue', 'mnli')`",
"Thanks! I thought I had to use the same code displayed in the live viewer:\r\n```python\r\n!pip install nlp\r\nfrom nlp import load_dataset\r\ndataset = load_dataset('multi_nli', 'plain_text')\r\n```\r\nYour suggestion works, even if then I got a different issue (#242). ",
"Glad it helps !\nThough I am not one of hf team, but maybe you should close this issue first."
] |
https://api.github.com/repos/huggingface/datasets/issues/4852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4852/comments | https://api.github.com/repos/huggingface/datasets/issues/4852/events | https://github.com/huggingface/datasets/issues/4852 | 1,339,450,991 | I_kwDODunzps5P1mZv | 4,852 | Bug in multilingual_with_para config of exams dataset and checksums error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-08-15T20:14:52Z | 2022-09-16T09:50:55Z | 2022-08-16T06:29:07Z | null | ## Describe the bug
There is a bug for "multilingual_with_para" config in exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz']
```
CC: @thesofakillers | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4852/timeline | null | completed | null | null | false | [
"Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? Is there a way to track the release?",
"Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you can use the version of the `exams` on our main branch by passing `revision` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"exams\", revision=\"main\")\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1381/comments | https://api.github.com/repos/huggingface/datasets/issues/1381/events | https://github.com/huggingface/datasets/pull/1381 | 760,320,960 | MDExOlB1bGxSZXF1ZXN0NTM1MTcyMjkw | 1,381 | Add twi text c3 | [] | closed | false | null | 6 | 2020-12-09T13:16:38Z | 2020-12-13T18:39:27Z | 2020-12-13T18:39:27Z | null | Added Twi texts for training embeddings and language models based on the paper https://www.aclweb.org/anthology/2020.lrec-1.335/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1381/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1381/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1381.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1381",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1381.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1381"
} | true | [
"looks like this PR includes changes about other datasets\r\n\r\nCan you only include the changes related to twi text c3 please ?",
"Hi @lhoestq , I have removed the unnecessary files. Can you please confirm?",
"You might need to either find a way to go back to the commit before it changes 389 files or create a new branch.",
"okay, I have created another branch, see the latest pull https://github.com/huggingface/datasets/pull/1518 @cstorm125 ",
"Hii please follow me",
"Closing this one in favor of #1518"
] |
https://api.github.com/repos/huggingface/datasets/issues/3628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3628/comments | https://api.github.com/repos/huggingface/datasets/issues/3628/events | https://github.com/huggingface/datasets/issues/3628 | 1,113,930,644 | I_kwDODunzps5CZTuU | 3,628 | Dataset Card Creator drops information for "Additional Information" Section | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2022-01-25T14:06:17Z | 2022-01-25T14:09:01Z | null | null | First of all, the card creator is a great addition and really helpful for streamlining dataset cards!
## Describe the bug
I encountered an inconvenient bug when entering "Additional Information" in the react app, which drops already entered text when switching to a previous section, and then back again to "Additional Information". I was able to reproduce the issue in both Firefox and Chrome, so I suspect a problem with the React logic that doesn't expect users to switch back in the final section.
Edit: I'm also not sure whether this is the right place to open the bug report on, since it's not clear to me which particular project it belongs to, or where I could find associated source code.
## Steps to reproduce the bug
1. Navigate to the Section "Additional Information" in the [dataset card creator](https://huggingface.co/datasets/card-creator/)
2. Enter text in an arbitrary field, e.g., "Dataset Curators".
3. Switch back to a previous section, like "Dataset Creation".
4. When switching back again to "Additional Information", the text has been deleted.
Notably, this behavior can be reproduced again and again, it's not just problematic for the first "switch-back" from Additional Information.
## Expected results
For step 4, the previously entered information should still be present in the boxes, similar to the behavior to all other sections (switching back there works as expected)
## Actual results
The text boxes are empty again, and previously entered text got deleted.
## Environment info
- `datasets` version: N/A
- Platform: Firefox 96.0 / Chrome 97.0
- Python version: N/A
- PyArrow version: N/A
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3628/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5909/comments | https://api.github.com/repos/huggingface/datasets/issues/5909/events | https://github.com/huggingface/datasets/pull/5909 | 1,728,900,068 | PR_kwDODunzps5Rgga6 | 5,909 | Use more efficient and idiomatic way to construct list. | [] | closed | false | null | 3 | 2023-05-27T18:54:47Z | 2023-05-31T15:37:11Z | 2023-05-31T13:28:29Z | null | Using `*` is ~2X faster according to [benchmark](https://colab.research.google.com/gist/ttsugriy/c964a2604edf70c41911b10335729b6a/for-vs-mult.ipynb) with just 4 patterns. This doesn't matter much since this tiny difference is not going to be noticeable, but why not? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5909/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5909",
"merged_at": "2023-05-31T13:28:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5909"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008156 / 0.011353 (-0.003197) | 0.005563 / 0.011008 (-0.005445) | 0.118319 / 0.038508 (0.079810) | 0.044305 / 0.023109 (0.021195) | 0.366221 / 0.275898 (0.090323) | 0.407585 / 0.323480 (0.084105) | 0.006961 / 0.007986 (-0.001024) | 0.004841 / 0.004328 (0.000513) | 0.089949 / 0.004250 (0.085698) | 0.062197 / 0.037052 (0.025144) | 0.360721 / 0.258489 (0.102232) | 0.415332 / 0.293841 (0.121491) | 0.035709 / 0.128546 (-0.092837) | 0.010617 / 0.075646 (-0.065030) | 0.397454 / 0.419271 (-0.021817) | 0.063490 / 0.043533 (0.019958) | 0.374289 / 0.255139 (0.119150) | 0.382827 / 0.283200 (0.099628) | 0.121014 / 0.141683 (-0.020669) | 1.729933 / 1.452155 (0.277779) | 1.896222 / 1.492716 (0.403506) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.254030 / 0.018006 (0.236023) | 0.491225 / 0.000490 (0.490736) | 0.018933 / 0.000200 (0.018734) | 0.000413 / 0.000054 (0.000358) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033085 / 0.037411 (-0.004327) | 0.132837 / 0.014526 (0.118311) | 0.143275 / 0.176557 (-0.033282) | 0.215800 / 0.737135 (-0.521335) | 0.149802 / 0.296338 (-0.146536) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474688 / 0.215209 (0.259479) | 4.743223 / 2.077655 (2.665569) | 2.163107 / 1.504120 (0.658988) | 1.946396 / 1.541195 (0.405201) | 2.057538 / 1.468490 
(0.589047) | 0.618836 / 4.584777 (-3.965941) | 4.605934 / 3.745712 (0.860222) | 2.201537 / 5.269862 (-3.068324) | 1.275758 / 4.565676 (-3.289919) | 0.077782 / 0.424275 (-0.346493) | 0.014830 / 0.007607 (0.007223) | 0.593372 / 0.226044 (0.367328) | 5.927000 / 2.268929 (3.658072) | 2.687293 / 55.444624 (-52.757331) | 2.301797 / 6.876477 (-4.574679) | 2.489928 / 2.142072 (0.347856) | 0.756779 / 4.805227 (-4.048449) | 0.168065 / 6.500664 (-6.332600) | 0.077276 / 0.075469 (0.001807) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.608169 / 1.841788 (-0.233619) | 19.048790 / 8.074308 (10.974482) | 16.100228 / 10.191392 (5.908836) | 0.215346 / 0.680424 (-0.465077) | 0.022293 / 0.534201 (-0.511907) | 0.535899 / 0.579283 (-0.043384) | 0.533729 / 0.434364 (0.099365) | 0.562697 / 0.540337 (0.022360) | 0.764082 / 1.386936 (-0.622854) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010087 / 0.011353 (-0.001266) | 0.005357 / 0.011008 (-0.005651) | 0.092678 / 0.038508 (0.054170) | 0.041207 / 0.023109 (0.018098) | 0.437464 / 0.275898 (0.161566) | 0.527867 / 0.323480 (0.204387) | 0.006861 / 0.007986 (-0.001125) | 0.006131 / 0.004328 (0.001802) | 0.093741 / 0.004250 (0.089490) | 0.064142 / 0.037052 (0.027090) | 0.433577 / 0.258489 (0.175088) | 0.537148 / 0.293841 (0.243307) | 0.035339 / 0.128546 (-0.093207) | 0.010432 / 0.075646 (-0.065214) | 0.102838 / 0.419271 (-0.316434) | 0.057905 / 0.043533 (0.014372) | 0.437956 / 0.255139 (0.182817) | 0.509562 / 0.283200 (0.226362) | 0.120620 / 0.141683 (-0.021063) | 1.798686 / 1.452155 (0.346531) | 2.013290 / 1.492716 (0.520574) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.249067 / 0.018006 (0.231061) | 0.462219 / 0.000490 (0.461729) | 0.000476 / 0.000200 (0.000276) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033988 / 0.037411 (-0.003424) | 0.135863 / 0.014526 (0.121337) | 0.144082 / 0.176557 (-0.032474) | 0.201715 / 0.737135 (-0.535421) | 0.152079 / 0.296338 (-0.144259) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.522820 / 0.215209 (0.307611) | 5.216723 / 2.077655 (3.139068) | 2.582355 / 1.504120 (1.078235) | 2.352799 / 1.541195 (0.811604) | 2.451943 / 1.468490 (0.983453) | 0.620381 / 4.584777 (-3.964396) | 4.537841 / 3.745712 (0.792129) | 2.206431 / 5.269862 (-3.063431) | 1.269865 / 4.565676 (-3.295811) | 0.078744 / 0.424275 (-0.345531) | 0.014375 / 0.007607 (0.006768) | 0.648215 / 0.226044 (0.422171) | 6.482809 / 2.268929 (4.213881) | 3.210670 / 55.444624 (-52.233954) | 2.847485 / 6.876477 (-4.028992) | 2.820946 / 2.142072 (0.678873) | 0.762711 / 4.805227 (-4.042516) | 0.171235 / 6.500664 (-6.329429) | 0.080230 / 0.075469 (0.004761) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.646840 / 1.841788 (-0.194948) | 19.400451 / 8.074308 (11.326142) | 16.758845 / 10.191392 (6.567453) | 0.171377 / 0.680424 (-0.509046) | 0.020400 / 0.534201 (-0.513801) | 0.467675 / 0.579283 (-0.111608) | 0.529745 / 0.434364 (0.095381) | 0.605989 / 0.540337 (0.065652) | 0.694659 / 1.386936 (-0.692277) |\n\n</details>\n</details>\n\n\n",
"It's faster because all the items are the same object, but this also means modifying one of them will alter each unless these items are immutable, and they are in this case (tuples). So we should be careful when using this idiom."
] |
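
The row above (PR #5909) and the closing review comment summarize the trade-off; the standalone snippet below reproduces both points: the `*` idiom is faster for building a list of identical items, and every slot holds the same object, which only matters when that object is mutable.

```python
import timeit

n = 1_000_000
# Multiplication allocates the list once and repeats one reference, so it beats a comprehension.
print("mult :", timeit.timeit(lambda: [("a", "b")] * n, number=10))
print("comp :", timeit.timeit(lambda: [("a", "b") for _ in range(n)], number=10))

# The caveat from the review: all entries are the *same* object. Harmless for immutable
# items like tuples or strings, surprising for mutable ones.
shared = [[]] * 3
shared[0].append(1)
print(shared)        # [[1], [1], [1]]

independent = [[] for _ in range(3)]
independent[0].append(1)
print(independent)   # [[1], [], []]
```
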
https://api.github.com/repos/huggingface/datasets/issues/3614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3614/comments | https://api.github.com/repos/huggingface/datasets/issues/3614/events | https://github.com/huggingface/datasets/pull/3614 | 1,110,736,657 | PR_kwDODunzps4xZdCe | 3,614 | Minor fixes | [] | closed | false | null | 0 | 2022-01-21T17:48:44Z | 2022-01-24T12:45:49Z | 2022-01-24T12:45:49Z | null | This PR:
* adds "desc" to the `ignore_kwargs` list in `Dataset.filter`
* fixes the default value of `id` in `DatasetDict.prepare_for_task` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3614/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3614.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3614",
"merged_at": "2022-01-24T12:45:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3614.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3614"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2873/comments | https://api.github.com/repos/huggingface/datasets/issues/2873/events | https://github.com/huggingface/datasets/pull/2873 | 989,587,695 | MDExOlB1bGxSZXF1ZXN0NzI4MzA0MTMw | 2,873 | adding swedish_medical_ner | [] | closed | false | null | 2 | 2021-09-07T04:44:53Z | 2021-09-17T20:47:37Z | 2021-09-17T20:47:37Z | null | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
Code refactored | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2873/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2873",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2873"
} | true | [
"Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?",
"Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset"
] |
https://api.github.com/repos/huggingface/datasets/issues/2791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2791/comments | https://api.github.com/repos/huggingface/datasets/issues/2791/events | https://github.com/huggingface/datasets/pull/2791 | 968,360,314 | MDExOlB1bGxSZXF1ZXN0NzEwNDgxNDAy | 2,791 | Fix typo in cnn_dailymail | [] | closed | false | null | 0 | 2021-08-12T08:38:42Z | 2021-08-12T11:17:59Z | 2021-08-12T11:17:59Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2791/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2791",
"merged_at": "2021-08-12T11:17:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2791"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4215/comments | https://api.github.com/repos/huggingface/datasets/issues/4215/events | https://github.com/huggingface/datasets/pull/4215 | 1,214,579,162 | PR_kwDODunzps42uuhY | 4,215 | Add `drop_last_batch` to `IterableDataset.map` | [] | closed | false | null | 1 | 2022-04-25T14:15:19Z | 2022-05-03T15:56:07Z | 2022-05-03T15:48:54Z | null | Addresses this comment: https://github.com/huggingface/datasets/pull/3801#pullrequestreview-901736921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4215/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4215/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4215.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4215",
"merged_at": "2022-05-03T15:48:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4215.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4215"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
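
A rough usage sketch for the `drop_last_batch` flag added by the PR in the row above. The dataset name and column layout are borrowed from the `wmt16` example elsewhere in this dump, and the batch size is arbitrary; treat it as an assumption rather than the canonical API example.

```python
from datasets import load_dataset

# Streaming returns an IterableDataset, which is where drop_last_batch applies.
ids = load_dataset("wmt16", "de-en", split="train", streaming=True)


def add_lengths(batch):
    # Batched transform that can rely on a full batch, because the trailing,
    # smaller batch is skipped by drop_last_batch below.
    batch["en_len"] = [len(t["en"]) for t in batch["translation"]]
    return batch


ids = ids.map(add_lengths, batched=True, batch_size=1000, drop_last_batch=True)
print(next(iter(ids)).keys())
```
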
https://api.github.com/repos/huggingface/datasets/issues/5451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5451/comments | https://api.github.com/repos/huggingface/datasets/issues/5451/events | https://github.com/huggingface/datasets/issues/5451 | 1,552,336,300 | I_kwDODunzps5chsWs | 5,451 | ImageFolder BadZipFile: Bad offset for central directory | [] | closed | false | null | 3 | 2023-01-22T23:50:12Z | 2023-05-23T10:35:48Z | 2023-02-10T16:31:36Z | null | ### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents

   1350         # self.start_dir: Position of start of central directory
   1351         self.start_dir = offset_cd + concat
   1352         if self.start_dir < 0:
 ❱ 1353             raise BadZipFile("Bad offset for central directory")
   1354         fp.seek(self.start_dir, 0)
   1355         data = fp.read(size_cd)
   1356         fp = io.BytesIO(data)

BadZipFile: Bad offset for central directory
Extracting data files: 35%|██████████████████ | 38572/110812 [00:10<00:20, 3576.26it/s]
```
### Steps to reproduce the bug
```
load_dataset(
    args.dataset_name,
    args.dataset_config_name,
    cache_dir=args.cache_dir,
),
```
### Expected behavior
loads the dataset
### Environment info
datasets==2.8.0
Python 3.10.8
Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5451/timeline | null | completed | null | null | false | [
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?\r\n\r\nit may be related to https://github.com/huggingface/datasets/pull/5640",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`.",
"For others that find this issue following a `BadZipFile` error, I had the same problem because I had a file in a folder dataset `my-image.target` and the datasets library was incorrectly determining that the (PNG) file was a zip archive. When it tried to extract the file, this error occurred. \r\n\r\nUpdating to `datasets==2.12.0` fixed the problem for me."
] |
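
A small aside on the failure mode discussed in the row above: `zipfile.is_zipfile` only checks for a ZIP end-of-central-directory signature, so a non-archive file that happens to contain those bytes can pass the check and then raise `BadZipFile` during extraction. The defensive wrapper below is a generic sketch, not the `datasets` code.

```python
import zipfile


def safe_extract(path: str, out_dir: str) -> bool:
    """Extract `path` as a ZIP archive if possible; return False when it is not really one."""
    if not zipfile.is_zipfile(path):
        return False
    try:
        with zipfile.ZipFile(path) as zf:
            zf.extractall(out_dir)
        return True
    except zipfile.BadZipFile:
        # is_zipfile() can be fooled by stray ZIP signatures inside non-archive files,
        # which is how a mislabelled image file can end up failing here.
        return False
```
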
https://api.github.com/repos/huggingface/datasets/issues/5575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5575/comments | https://api.github.com/repos/huggingface/datasets/issues/5575/events | https://github.com/huggingface/datasets/issues/5575 | 1,598,396,552 | I_kwDODunzps5fRZiI | 5,575 | Metadata for each column | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"closed_at": null,
"closed_issues": 0,
"created_at": "2023-02-13T16:22:42Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
},
"description": "Next major release",
"due_on": null,
"html_url": "https://github.com/huggingface/datasets/milestone/10",
"id": 9038583,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/10/labels",
"node_id": "MI_kwDODunzps4Aier3",
"number": 10,
"open_issues": 3,
"state": "open",
"title": "3.0",
"updated_at": "2023-04-12T17:00:57Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/10"
} | 3 | 2023-02-24T10:53:44Z | 2023-03-10T17:04:04Z | null | null | ### Feature request
Being able to put some metadata for each column as a string or any other type.
### Motivation
I will explain the motivation with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing pipelines and see which one works better on our downstream task. As a workaround right now, I compute a hash of the preprocessing that the images went through and use it as part of the new column's name. It would be nice to be able to attach some kind of metadata to each column in these scenarios.
### Your contribution
Maybe we could map another relational like database as the metadata? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5575/timeline | null | null | null | null | false | [
"Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = Features({\"col\": col_feature}, metadata=\"Some schema-level metadata\")\r\n```\r\n\r\nWDYT?",
"Sorry for the late reply, \r\nYes, I think this is the most straight-forward approach with the things that we already have.\r\n\r\n",
"@mariosasko Let me know how I can help."
] |
https://api.github.com/repos/huggingface/datasets/issues/3560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3560/comments | https://api.github.com/repos/huggingface/datasets/issues/3560/events | https://github.com/huggingface/datasets/pull/3560 | 1,098,280,652 | PR_kwDODunzps4wwOMf | 3,560 | Run pyupgrade for Python 3.6+ | [] | closed | false | null | 3 | 2022-01-10T19:20:53Z | 2022-01-31T13:38:49Z | 2022-01-31T09:37:34Z | null | Run the command:
```bash
pyupgrade $(find . -name "*.py" -type f) --py36-plus
```
Which mainly avoids unnecessary lists creations and also removes unnecessary code for Python 3.6+.
It was originally part of #3489.
Tip for reviewing faster: use the CLI (`git diff`) and scroll. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3560/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3560",
"merged_at": "2022-01-31T09:37:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3560"
} | true | [
"Hi ! Thanks for the change :)\r\nCould it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.",
"> Hi ! Thanks for the change :)\r\n> Could it be possible to only run it for the code in `src/` ? We try to not change the code in the `datasets/` directory too often since it refreshes the users cache when they upgrade `datasets`.\r\n\r\nI reverted the changes in `datasets/` instead of changing only `src/`. Does it sound good?",
"I just resolved some conflicts with the master branch. If the CI is green we can merge :)"
] |
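
For readers unfamiliar with the tool in the row above, the snippet below shows the kind of rewrites `pyupgrade --py36-plus` applies; the before/after pairs are generic illustrations and are not taken from this PR's diff.

```python
# "Before" code that pyupgrade --py36-plus would rewrite; the expected result is shown in comments.

class Config(object):                              # becomes: class Config:
    def __init__(self, name):
        super(Config, self).__init__()             # becomes: super().__init__()
        self.label = u"cfg-{}".format(name)        # becomes: self.label = f"cfg-{name}"
        self.keys = set([k for k in ("a", "b")])   # becomes: self.keys = {k for k in ("a", "b")}
```
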
https://api.github.com/repos/huggingface/datasets/issues/3178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3178/comments | https://api.github.com/repos/huggingface/datasets/issues/3178/events | https://github.com/huggingface/datasets/issues/3178 | 1,039,539,076 | I_kwDODunzps499huE | 3,178 | "Property couldn't be hashed properly" even though fully picklable | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 23 | 2021-10-29T12:56:09Z | 2023-01-04T15:33:16Z | 2022-11-02T17:18:43Z | null | ## Describe the bug
I am trying to tokenize a dataset with spaCy. I found that no matter what I do, the spaCy language object (`nlp`) prevents `datasets` from pickling correctly - or so the warning says - even though manually pickling is no issue. It should not be an issue either, since spaCy objects are picklable.
## Steps to reproduce the bug
Here is a [colab](https://colab.research.google.com/drive/1gt75LCBIzsmBMvvipEOvWulvyZseBiA7?usp=sharing) but for some reason I cannot reproduce it there. That may have to do with logging/tqdm on Colab, or with running things in notebooks. I tried the code below on Windows and Ubuntu as a Python script and got the same issue (warning below).
```python
import pickle

from datasets import load_dataset
import spacy


class Processor:
    def __init__(self):
        self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])

    @staticmethod
    def collate(batch):
        return [d["en"] for d in batch]

    def parse(self, batch):
        batch = batch["translation"]
        return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}

    def process(self):
        ds = load_dataset("wmt16", "de-en", split="train[:10%]")
        ds = ds.map(self.parse, batched=True, num_proc=6)


if __name__ == '__main__':
    pr = Processor()
    # succeeds
    with open("temp.pkl", "wb") as f:
        pickle.dump(pr, f)
    print("Successfully pickled!")

    pr.process()
```
---
Here is a small change that includes `Hasher.hash` that shows that the hasher cannot seem to successfully pickle parts form the NLP object.
```python
from datasets.fingerprint import Hasher
import pickle

from datasets import load_dataset
import spacy


class Processor:
    def __init__(self):
        self.nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner", "lemmatizer"])

    @staticmethod
    def collate(batch):
        return [d["en"] for d in batch]

    def parse(self, batch):
        batch = batch["translation"]
        return {"translation_tok": [{"en_tok": " ".join([t.text for t in doc])} for doc in self.nlp.pipe(self.collate(batch))]}

    def process(self):
        ds = load_dataset("wmt16", "de-en", split="train[:10]")
        return ds.map(self.parse, batched=True)


if __name__ == '__main__':
    pr = Processor()
    # succeeds
    with open("temp.pkl", "wb") as f:
        pickle.dump(pr, f)
    print("Successfully pickled class instance!")

    # succeeds
    with open("temp.pkl", "wb") as f:
        pickle.dump(pr.nlp, f)
    print("Successfully pickled nlp!")

    # fails
    print(Hasher.hash(pr.nlp))

    pr.process()
```
## Expected results
This to be picklable, working (fingerprinted), and no warning.
## Actual results
In the first snippet, I get this warning
Parameter 'function'=<function Processor.parse at 0x7f44982247a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
In the second, I get this traceback which directs to the `Hasher.hash` line.
```
Traceback (most recent call last):
File " \Python\Python36\lib\pickle.py", line 918, in save_global
obj2, parent = _getattribute(module, name)
File " \Python\Python36\lib\pickle.py", line 266, in _getattribute
.format(name, obj))
AttributeError: Can't get local attribute 'add_codes.<locals>.ErrorsWithCodes' on <function add_codes at 0x00000296FF606EA0>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File " scratch_4.py", line 40, in <module>
print(Hasher.hash(pr.nlp))
File " \lib\site-packages\datasets\fingerprint.py", line 191, in hash
return cls.hash_default(value)
File " \lib\site-packages\datasets\fingerprint.py", line 184, in hash_default
return cls.hash_bytes(dumps(value))
File " \lib\site-packages\datasets\utils\py_utils.py", line 345, in dumps
dump(obj, file)
File " \lib\site-packages\datasets\utils\py_utils.py", line 320, in dump
Pickler(file, recurse=True).dump(obj)
File " \lib\site-packages\dill\_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File " \Python\Python36\lib\pickle.py", line 409, in dump
self.save(obj)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 781, in save_list
self._batch_appends(obj)
File " \Python\Python36\lib\pickle.py", line 805, in _batch_appends
save(x)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 634, in save_reduce
save(state)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1176, in save_instancemethod0
pickler.save_reduce(MethodType, (obj.__func__, obj.__self__), obj=obj)
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 736, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\datasets\utils\py_utils.py", line 523, in save_function
obj=obj,
File " \Python\Python36\lib\pickle.py", line 610, in save_reduce
save(args)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \Python\Python36\lib\pickle.py", line 751, in save_tuple
save(element)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File " \Python\Python36\lib\pickle.py", line 821, in save_dict
self._batch_setitems(obj.items())
File " \Python\Python36\lib\pickle.py", line 847, in _batch_setitems
save(v)
File " \Python\Python36\lib\pickle.py", line 521, in save
self.save_reduce(obj=obj, *rv)
File " \Python\Python36\lib\pickle.py", line 605, in save_reduce
save(cls)
File " \Python\Python36\lib\pickle.py", line 476, in save
f(self, obj) # Call unbound method with explicit self
File " \lib\site-packages\dill\_dill.py", line 1439, in save_type
StockPickler.save_global(pickler, obj, name=name)
File " \Python\Python36\lib\pickle.py", line 922, in save_global
(obj, module_name, name))
_pickle.PicklingError: Can't pickle <class 'spacy.errors.add_codes.<locals>.ErrorsWithCodes'>: it's not found as spacy.errors.add_codes.<locals>.ErrorsWithCodes
```
## Environment info
Tried on both Linux and Windows
- `datasets` version: 1.14.0
- Platform: Windows-10-10.0.19041-SP0 + Python 3.7.9; Linux-5.11.0-38-generic-x86_64-with-Ubuntu-20.04-focal + Python 3.7.12
- PyArrow version: 6.0.0
| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3178/timeline | null | completed | null | null | false | [
"After some digging, I found that this is caused by `dill` and using `recurse=True)` when trying to dump the object. The problem also occurs without multiprocessing. I can only find [the following information](https://dill.readthedocs.io/en/latest/dill.html#dill._dill.dumps) about this:\r\n\r\n> If recurse=True, then objects referred to in the global dictionary are recursively traced and pickled, instead of the default behavior of attempting to store the entire global dictionary. This is needed for functions defined via exec().\r\n\r\nIn the utils, this is explicitly enabled\r\n\r\nhttps://github.com/huggingface/datasets/blob/df63614223bf1dd1feb267d39d741bada613352c/src/datasets/utils/py_utils.py#L327-L330\r\n\r\nIs this really necessary? Is there a way around it? Also pinging the spaCy team in case this is easy to solve on their end. (I hope so.)",
"Hi ! Thanks for reporting\r\n\r\nYes `recurse=True` is necessary to be able to hash all the objects that are passed to the `map` function\r\n\r\nEDIT: hopefully this object can be serializable soon, but otherwise we can consider adding more control to the user on how to hash objects that are not serializable (as mentioned in https://github.com/huggingface/datasets/issues/3044#issuecomment-948818210)",
"I submitted a PR to spacy that should fix this issue (linked above). I'll leave this open until that PR is merged. ",
"@lhoestq After some testing I find that even with the updated spaCy, no cache files are used. I do not get any warnings though, but I can see that map is run every time I run the code. Do you have thoughts about why? If you want to try the tests below, make sure to install spaCy from [here](https://github.com/BramVanroy/spaCy) and installing the base model with `python -m spacy download en_core_web_sm`.\r\n\r\n```python\r\nfrom functools import partial\r\nfrom pathlib import Path\r\n\r\nimport spacy\r\nfrom datasets import Dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n lines = Path(fin).read_text(encoding=\"utf-8\").splitlines()\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n ds = Dataset.from_dict({\"text\": lines, \"text_id\": list(range(len(lines)))})\r\n tok = partial(tokenize, nlp)\r\n ds = ds.map(tok, load_from_cache_file=True)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\n... or with load_dataset (here I get the message that `load_dataset` can reuse the dataset, but still I see all samples being processed via the tqdm progressbar):\r\n\r\n```python\r\nfrom functools import partial\r\n\r\nimport spacy\r\nfrom datasets import load_dataset\r\nimport datasets\r\ndatasets.logging.set_verbosity_debug()\r\n\r\ndef tokenize(nlp, sample):\r\n return {\"tok\": [t.text for t in nlp(sample[\"text\"])]}\r\n\r\ndef main():\r\n fin = r\"some/file/with/many/lines\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n tok_func = partial(tokenize, nlp)\r\n ds = load_dataset('text', data_files=fin)\r\n ds = ds[\"train\"].map(tok_func)\r\n print(ds[0:2])\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"It looks like every time you load `en_core_web_sm` you get a different python object:\r\n```python\r\nimport spacy\r\nfrom datasets.fingerprint import Hasher\r\n\r\nnlp1 = spacy.load(\"en_core_web_sm\")\r\nnlp2 = spacy.load(\"en_core_web_sm\")\r\nHasher.hash(nlp1), Hasher.hash(nlp2)\r\n# ('f6196a33882fea3b', 'a4c676a071f266ff')\r\n```\r\nHere is a list of attributes that have different hashes for `nlp1` and `nlp2`:\r\n- tagger\r\n- parser\r\n- entity\r\n- pipeline (it's the list of the three attributes above)\r\n\r\nI just took a look at the tagger for example and I found subtle differences (there may be other differences though):\r\n```python\r\nnlp1.tagger.model.tok2vec.embed.id, nlp2.tagger.model.tok2vec.embed.id\r\n# (1721, 2243)\r\n```\r\n\r\nWe can try to find all the differences and find the best way to hash those objects properly",
"Thanks for searching! I went looking, and found that this is an implementation detail of thinc\r\n\r\nhttps://github.com/explosion/thinc/blob/68691e303ae68cae4bc803299016f1fc064328bf/thinc/model.py#L96-L98\r\n\r\nPresumably (?) exactly to distinguish between different parts in memory when multiple models are loaded. Do not think that this can be changed on their end - but I will ask what exactly it is for (I'm curious).\r\n\r\nDo you think it is overkill to write something into the hasher explicitly to deal with spaCy models? It seems like something that is beneficial to many, but I do not know if you are open to adding third-party-specific ways to deal with this. If you are, I can have a look for this specific case how we can ignore `thinc.Model.id` from the hasher.",
"It can be even simpler to hash the bytes of the pipeline instead\r\n```python\r\nnlp1.to_bytes() == nlp2.to_bytes() # True\r\n```\r\n\r\nIMO we should integrate the custom hashing for spacy models into `datasets` (we use a custom Pickler for that).\r\nWhat could be done on Spacy's side instead (if they think it's nice to have) is to implement a custom pickling for these classes using `to_bytes`/`from_bytes` to have deterministic pickle dumps.\r\n\r\nFinally I think it would be nice in the future to add an API to let `datasets` users control this kind of things. Something like being able to define your own hashing if you use complex objects.\r\n```python\r\[email protected]_hash(spacy.language.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n```",
"I do not quite understand what you mean. as far as I can tell, using `to_bytes` does a pickle dump behind the scene (with `srsly`), recursively using `to_bytes` on the required objects. Therefore, the result of `to_bytes` is a deterministic pickle dump AFAICT. Or do you mean that you wish that using your own pickler and running `dumps(nlp)` should also be deterministic? I guess that would require `__setstate__` and `__getstate__` methods on all the objects that have to/from_bytes. I'll have a listen over at spaCy what they think, and if that would solve the issue. I'll try this locally first, if I find the time.\r\n\r\nI agree that having the option to use a custom hasher would be useful. I like your suggestion!\r\n\r\nEDIT: after trying some things and reading through their API, it seems that they explicitly do not want this. https://spacy.io/usage/saving-loading#pipeline\r\n\r\n> When serializing the pipeline, keep in mind that this will only save out the binary data for the individual components to allow spaCy to restore them β not the entire objects. This is a good thing, because it makes serialization safe. But it also means that you have to take care of storing the config, which contains the pipeline configuration and all the relevant settings.\r\n\r\nBest way forward therefore seems to implement the ability to specify a hasher depending on the objects that are pickled, as you suggested. I can work on this if that is useful. I could use some pointers as to how you would like to implement the `register_hash` functionality though. I assume using `catalogue` over at Explosion might be a good starting point.\r\n\r\n",
"Interestingly, my PR does not solve the issue discussed above. The `tokenize` function hash is different on every run, because for some reason `nlp.__call__` has a different hash every time. The issue therefore seems to run much deeper than I thought. If you have any ideas, I'm all ears.\r\n\r\n```shell\r\ngit clone https://github.com/explosion/spaCy.git\r\ncd spaCy/\r\ngit checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf\r\ncd ..\r\n\r\ngit clone https://github.com/BramVanroy/datasets.git\r\ncd datasets\r\ngit checkout registry\r\npip install -e .\r\npip install ../spaCy\r\nspacy download en_core_web_sm\r\n```\r\n\r\n```python\r\nimport spacy\r\n\r\nfrom datasets import load_dataset\r\nfrom datasets.fingerprint import Hasher\r\nfrom datasets.utils.registry import hashers\r\n\r\[email protected](spacy.Language)\r\ndef hash_spacy_language(nlp):\r\n return Hasher.hash(nlp.to_bytes())\r\n\r\ndef main():\r\n fin = r\"your/large/file\"\r\n nlp = spacy.load(\"en_core_web_sm\")\r\n # This is now always the same yay!\r\n print(Hasher.hash(nlp))\r\n\r\n def tokenize(l):\r\n return {\"tok\": [t.text for t in nlp(l[\"text\"])]}\r\n\r\n ds = load_dataset(\"text\", data_files=fin)\r\n # But this is not...\r\n print(Hasher.hash(tokenize))\r\n # ... because of this\r\n print(Hasher.hash(nlp.__call__))\r\n ds = ds[\"train\"].map(tokenize)\r\n print(ds[0:2])\r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```",
"Hi ! I just answered in your PR :) In order for your custom hashing to be used for nested objects, you must integrate it into our recursive pickler that we use for hashing.",
"I don't quite understand the design constraints of `datasets` or the script that you're running, but my usual advice is to avoid using pickle unless you _absolutely_ have to. So for instance instead of doing your `partial` over the `nlp` object itself, can you just pass the string `en_core_web_sm` in? This will mean calling `spacy.load()` inside the work function, but this is no worse than having to call `pickle.load()` on the contents of the NLP object anyway -- in fact you'll generally find `spacy.load()` faster, apart from the disk read.\r\n\r\nIf you need to pass in the bytes data and don't want to read from disk, you could do something like this:\r\n\r\n```\r\nmsg = (nlp.lang, nlp.to_bytes())\r\n\r\ndef unpack(lang, bytes_data):\r\n return spacy.blank(lang).from_bytes(bytes_data)\r\n```\r\n\r\nI think that should probably work: the Thinc `model.to_dict()` method (which is used by the `model.to_bytes()` method) doesn't pack the model's ID into the message, so the `nlp.to_bytes()` that you get shouldn't be affected by the global IDs. So you should get a clean message from `nlp.to_bytes()` that doesn't depend on the global state.",
"Hi Matthew, thanks for chiming in! We are currently implementing exactly what you suggest: `to_bytes()` as a default before pickling - but we may prefer `to_dict` to avoid double dumping.\r\n\r\n`datasets` uses pickle dumps (actually dill) to get unique representations of processing steps (a \"fingerprint\" or hash). So it never needs to re-load that dump - it just needs its value to create a hash. If a fingerprint is identical to a cached fingerprint, then the result can be retrieved from the on-disk cache. (@lhoestq or @mariosasko can correct me if I'm wrong.)\r\n\r\nI was experiencing the issue that parsing with spaCy gave me a different fingerprint on every run of the script and thus it could never load the processed dataset from cache. At first I thought the reason was that spaCy Language objects were not picklable with recursive dill, but even after [adjusting for that](https://github.com/explosion/spaCy/pull/9593) the issue persisted. @lhoestq found that this is due to the changing `id`, which you discussed [here](https://github.com/explosion/spaCy/discussions/9609#discussioncomment-1661081). So yes, you are right. On the surface there simply seems to be an incompatibility between `datasets` default caching functionality as it is currently implemented and `spacy.Language`.\r\n\r\nThe [linked PR](https://github.com/huggingface/datasets/pull/3224) aims to remedy that, though. Up to now I have put some effort into making it easier to define your own \"pickling\" function for a given type (and optionally any of its subclasses). That allows us to tell `datasets` that instead of doing `dill.save(nlp)` (non-deterministic), to use `dill.save(nlp.to_bytes())` (deterministic). When I find some more time, the PR [will be expanded](https://github.com/huggingface/datasets/pull/3224#issuecomment-968958528) to improve the user-experience a bit and add a built-in function to pickle `spacy.Language` as one of the defaults (using `to_bytes()`).",
"Is there a workaround for this? maybe by explicitly requesting datasets to cache the result of `.map()`?",
"Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n\r\nAs a workaround you can set the fingerprint that is going to be used by the cache:\r\n```python\r\nresult = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n```\r\nAny future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n\r\n**Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**",
"I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n\r\n```\r\nDataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\nParameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform [email protected] couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n```\r\n\r\nAnd when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n\r\nFor me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n\r\n```\r\ndill 0.3.4\r\nmultiprocess 0.70.12.2 \r\n```",
"> Hi ! If your function is not picklable, then the fingerprint of the resulting dataset can't be computed. The fingerprint is a hash that is used by the cache to reload previously computed datasets: the dataset file is named `cache-<fingerprint>.arrow` in your dataset's cache directory.\r\n> \r\n> As a workaround you can set the fingerprint that is going to be used by the cache:\r\n> \r\n> ```python\r\n> result = my_dataset.map(func, new_fingerprint=new_fingerprint)\r\n> ```\r\n> \r\n> Any future call to `map` with the same `new_fingerprint` will reload the result from the cache.\r\n> \r\n> **Be careful using this though: if you change your `func`, be sure to change the `new_fingerprint` as well.**\r\n\r\nIs the argument `new_fingerprint` available for datasetDict ? I can only use it on arrow datasets but might be useful to generalize it to DatasetDict's map as well ? @lhoestq ",
"> I've been having an issue that might be related to this when trying to pre-tokenize a corpus and caching it for using it later in the pre-training of a RoBERTa model. I always get the following warning:\r\n> \r\n> ```\r\n> Dataset text downloaded and prepared to /gpfswork/rech/project/user/.cache/hf-datasets/text/default-1850886023af0077/0.0.0/acc32f2f2ef863c93c2f30c52f7df6cc9053a1c2230b8d7da0d210404683ca08. Subsequent calls will reuse this data.\r\n> Parameter 'function'=<function encode_dataset.<locals>.<lambda> at 0x14a92157b280> of the transform [email protected] couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.\r\n> ```\r\n> \r\n> And when I launch the pre-training the pre-tokenized corpus is not found and it is tokenized again, which makes me waste precious GPU hours.\r\n> \r\n> For me, the workaround was downgrading `dill` and `multiprocess` to the following versions:\r\n> \r\n> ```\r\n> dill 0.3.4\r\n> multiprocess 0.70.12.2 \r\n> ```\r\n\r\nThis worked for me - thanks!",
"I see this has just been closed - it seems quite relevant to another tokenizer I have been trying to use, the `vinai/phobert` family of tokenizers\r\n\r\nhttps://huggingface.co/vinai/phobert-base\r\nhttps://huggingface.co/vinai/phobert-large\r\n\r\nI ran into an issue where a large dataset took several hours to tokenize, the process hung, and I was unable to use the cached version of the tokenized data:\r\n\r\nhttps://discuss.huggingface.co/t/cache-parallelize-long-tokenization-step/25791/3\r\n\r\nI don't see any way to specify the hash of the tokenizer or the fingerprint of the tokenized data to use, so is the tokenized dataset basically lost at this point? Is there a good way to avoid this happening again if I retokenize the data?\r\n",
"In your case it looks like the job failed before caching the data - maybe one of the processes crashed",
"Interesting. Thanks for the observation. Any suggestions on how to start tracking that down? Perhaps run it singlethreaded and see if it crashes?",
"You can monitor your RAM and disk space in case a process dies from OOM or disk full, and when it hangs you can check how many processes are running. IIRC there are other start methods for multiprocessing in python that may show an error message if a process dies.\r\n\r\nRunning on a single process can also help debugging this indeed",
"https://github.com/huggingface/datasets/issues/3178#issuecomment-1189435462\r\n\r\nThe solution does not solve for using commonvoice dataset (\"mozilla-foundation/common_voice_11_0\")",
"Hi @tung-msol could you open a new issue and share the error you got and the map function you used ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/826/comments | https://api.github.com/repos/huggingface/datasets/issues/826/events | https://github.com/huggingface/datasets/issues/826 | 739,976,716 | MDU6SXNzdWU3Mzk5NzY3MTY= | 826 | [GEM] Add E2E dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2020-11-10T14:50:40Z | 2020-12-03T13:37:57Z | 2020-12-03T13:37:57Z | null | ## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:** a dataset for training end-to-end, data-driven natural language generation systems in the restaurant domain. The dataset consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 reference free-text utterances per dialogue act on average
- **Paper:** https://arxiv.org/pdf/1706.09254.pdf https://arxiv.org/abs/1901.07931
- **Data:** http://www.macs.hw.ac.uk/InteractionLab/E2E/#data
- **Motivation:** This dataset will likely be included in the GEM shared task
Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
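Once the dataset is added, loading it should presumably boil down to something like this (the final dataset name on the Hub is a guess):
```python
from datasets import load_dataset

e2e = load_dataset("e2e_nlg")  # hypothetical name
print(e2e["train"][0])
```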
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/826/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2120/comments | https://api.github.com/repos/huggingface/datasets/issues/2120/events | https://github.com/huggingface/datasets/issues/2120 | 841,954,521 | MDU6SXNzdWU4NDE5NTQ1MjE= | 2,120 | dataset viewer does not work anymore | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 2 | 2021-03-26T13:22:13Z | 2021-03-26T15:52:22Z | 2021-03-26T15:52:22Z | null | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? It was very helpful @lhoestq
thanks for your help | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2120/timeline | null | completed | null | null | false | [
"Thanks for reporting :) We're looking into it",
"Back up. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3221/comments | https://api.github.com/repos/huggingface/datasets/issues/3221/events | https://github.com/huggingface/datasets/pull/3221 | 1,045,890,512 | PR_kwDODunzps4uJp4Z | 3,221 | Resolve data_files by split name | [] | closed | false | null | 4 | 2021-11-05T14:07:35Z | 2021-11-08T13:52:20Z | 2021-11-05T17:49:58Z | null | As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer what file is supposed to go to what split automatically, based on filenames.
I added support for different kinds of patterns, for both dataset repositories and local directories:
```
Input structure:
my_dataset_repository/
├── README.md
└── dataset.csv
Output patterns:
{"train": ["*"]}
```
```
Input structure:
my_dataset_repository/
├── README.md
├── train.csv
└── test.csv

my_dataset_repository/
├── README.md
└── data/
    ├── train.csv
    └── test.csv

my_dataset_repository/
├── README.md
├── train_0.csv
├── train_1.csv
├── train_2.csv
├── train_3.csv
├── test_0.csv
└── test_1.csv
Output patterns:
{"train": ["*train*"], "test": ["*test*"]}
```
```
Input structure:
my_dataset_repository/
├── README.md
└── data/
    ├── train/
    │   ├── shard_0.csv
    │   ├── shard_1.csv
    │   ├── shard_2.csv
    │   └── shard_3.csv
    └── test/
        ├── shard_0.csv
        └── shard_1.csv
Output patterns:
{"train": ["*train*/*", "*train*/**/*"], "test": ["*test*/*", "*test*/**/*"]}
```
and also this pattern that allows custom split names, which is the structure used by #3098 for `push_to_hub` (cc @LysandreJik):
```
Input structure:
my_dataset_repository/
├── README.md
└── data/
    ├── train-00000-of-00003.csv
    ├── train-00001-of-00003.csv
    ├── train-00002-of-00003.csv
    ├── test-00000-of-00001.csv
    ├── random-00000-of-00003.csv
    ├── random-00001-of-00003.csv
    └── random-00002-of-00003.csv
Output patterns:
{
    "train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
    "test": ["data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
    "random": ["data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"],
}
```
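With any of the layouts above, loading then becomes a plain `load_dataset` call with no explicit `data_files`; a quick sketch (the repository name is made up):
```python
from datasets import load_dataset

ds = load_dataset("username/my_dataset_repository")
print(ds)  # DatasetDict with "train"/"test" (plus "random" for the last layout)
```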
You can check the documentation about structuring your repository [here](https://52640-250213286-gh.circle-artifacts.com/0/docs/_build/html/repository_structure.html). cc @stevhliu
Fix https://github.com/huggingface/datasets/issues/3027
Fix https://github.com/huggingface/datasets/issues/3212
In the future we can also add support for dataset configurations. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3221/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3221",
"merged_at": "2021-11-05T17:49:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3221"
} | true | [
"Really cool!\r\nWhen splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?",
"> When splitting by folder, what do we use for validation set (\"valid\", \"validation\" or both)?\r\n\r\nBoth are fine :) As soon as it has \"valid\" in it",
"Merging for now, if you have comments about the documentation we can address them in subsequent PRs :)",
"Thanks for the comments @stevhliu :) I just opened https://github.com/huggingface/datasets/pull/3233 to take them into account"
] |
https://api.github.com/repos/huggingface/datasets/issues/1993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1993/comments | https://api.github.com/repos/huggingface/datasets/issues/1993/events | https://github.com/huggingface/datasets/issues/1993 | 822,758,387 | MDU6SXNzdWU4MjI3NTgzODc= | 1,993 | How to load a dataset with load_from disk and save it again after doing transformations without changing the original? | [] | closed | false | null | 7 | 2021-03-05T05:25:50Z | 2021-03-22T04:05:50Z | 2021-03-22T04:05:50Z | null | I am using the latest datasets library. In my work, I first use **load_from_disk** to load a data set that contains 3.8Gb information. Then during my training process, I update that dataset object and add new elements and save it in a different place.
When I save the dataset with **save_to_disk**, the original dataset, which is already on disk, also gets updated. I do not want to update it. How can I prevent this?
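For reference, a minimal sketch of the workflow described above (paths are placeholders, and the `map` call stands in for the actual updates):
```python
from datasets import load_from_disk

ds = load_from_disk("path/to/original_dataset")  # the ~3.8 GB dataset on disk
ds = ds.map(lambda example: example)             # stand-in for the real transformations
ds.save_to_disk("path/to/updated_dataset")       # the original copy should stay untouched
```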
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1993/timeline | null | completed | null | null | false | [
"Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset",
"Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?",
"@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ",
"I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?",
"ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ",
"> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly",
"Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5160/comments | https://api.github.com/repos/huggingface/datasets/issues/5160/events | https://github.com/huggingface/datasets/issues/5160 | 1,422,193,938 | I_kwDODunzps5UxPUS | 5,160 | Automatically add filename for image/audio folder | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 10 | 2022-10-25T09:56:49Z | 2022-10-26T16:51:46Z | null | null | ### Feature request
When creating a custom audio or image dataset, it would be great to automatically have access to the filename. It should be both:
a) Automatically displayed in the viewer
b) Automatically added as a column to the dataset when doing `load_dataset`
In `diffusers` our tests now rely quite heavily on images and audio files, and it's a bit tedious at the moment to download specific images from a datasets repo.
E.g. we have a dataset of images for tests in `diffusers`: https://huggingface.co/datasets/hf-internal-testing/diffusers-images
where it would be extremely nice to have direct access to the filename, both visually on the datasets page (@severo ) and via the `load_dataset` function. We currently have some awkward functionality to download images by path name: https://github.com/huggingface/diffusers/blob/2fb8fafa4b761f6fc144cf75a6f6f0ea6af3a1c1/src/diffusers/utils/testing_utils.py#L131
It would be much nicer to just go through `load_dataset(...)`.
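To make the request concrete, here is a hypothetical sketch using the existing `imagefolder` loader; the `file_name` column shown in the comment is exactly what is being asked for and does not exist today (paths and names are made up):
```python
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/diffusers-images", split="train")
# Today this yields an "image" (and possibly "label") column; the request is to also get:
# ds[0]["file_name"]  ->  "some_image.png"   (hypothetical column)
```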
### Motivation
Intuitively, the filename is something people understand directly. E.g. if you upload a folder of images online, it's nice to see the image as well as the filename next to it and to be able to use both right away.
The label, on the other hand, is less intuitive since you haven't added it yourself.
### Your contribution
Not sure if I have the time to add it myself anytime soon, but it would help us a lot for `diffusers`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5160/timeline | null | null | null | null | false | [
"Also cc @anton-l ",
"BTW the exact same holds true for the audio folder",
"I'm fine with adding a new column with the file name personally. Not sure how breaking this is though",
"@patrickvonplaten do you mean just filename or full relative path inside the repo?\r\nI think it shouldn't be breaking, at least I cannot come up with any case where it is. Maybe @mariosasko can?\r\n\r\nalso I think that the problem here and in general is that Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file. It can be changed when you load the dataset with `load_dataset` but not on it's Hub page. \r\n\r\n",
"> also I think that the problem here and in general Image/AudioFolder has default configuration which implies automatic label creation if there is not metadata file\r\n\r\nYea I agree it's often the wrong default. We can also imagine adding the builder's parameters as YAML in the repo.",
"@lhoestq yes I also got the idea of some YAML config! not sure of what priority it is though.",
"but it would actually also solve this issue: https://github.com/huggingface/datasets/issues/5153",
"I meant just the file name (no path) that would already be super helpful IMO :-) (maybe dir+filename if there are dirs in the folder)",
"@patrickvonplaten one more time, to be sure I understand you.\r\nFor example, we have data structure like this:\r\n```\r\nββ data/\r\nβ ββ subdir/\r\nβ βββ cats/\r\nβ βββ 0.jpg\r\nβ βββ 1.jpg\r\nβ βββ 2.jpg\r\nβ βββ dogs/\r\nβ βββ 0.jpg\r\nβ βββ 1.jpg\r\nβ βββ 2.jpg\r\nβββ another_subdir/\r\n βββ 10.jpg\r\n βββ 11.jpg\r\n βββ 12.jpg\r\n```\r\nIs it okay to provide `\"data/subdir/cats/0.jpg\"`, `\"data/subdir/dogs/0.jpg\"`, `\"data/another_subdir/10.jpg\"`?\r\nI think providing just filenames might be confusing if they are not unique, as in this example. ",
"Yes I think the relative path as you proposed makes a lot of sense :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/2168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2168/comments | https://api.github.com/repos/huggingface/datasets/issues/2168/events | https://github.com/huggingface/datasets/pull/2168 | 849,957,941 | MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5 | 2,168 | Preserve split type when realoding dataset | [] | closed | false | null | 5 | 2021-04-04T20:46:21Z | 2021-04-19T10:57:05Z | 2021-04-19T09:08:55Z | null | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction
from . import splits # gives us access to NamedSplit
```
and then define the `eval` globals as follows:
```python
{**arrow_reader.__dict__, **splits.__dict__}
```
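Continuing the snippet above, deserializing a stored split representation would then look roughly like this (the variable name `stored_split_repr` is illustrative):
```python
split = eval(stored_split_repr, {**arrow_reader.__dict__, **splits.__dict__})
```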
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2168/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2168/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2168.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2168",
"merged_at": "2021-04-19T09:08:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2168.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2168"
} | true | [
"Thanks for diving into this !\r\n\r\nBefore going further, I just want to make sure if using `eval` is the right solution\r\nPersonally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's possible to change this aspect instead.\r\n\r\nMaybe it would be better to convert the `_RelativeInstruction` to a string (or \"specs\") ?\r\nIt looks like `ReadInstruction.from_spec` already exists, but not the other way around.\r\nThe specs are the string representation of instructions. For example: `train+validation[:50%]`.\r\n\r\nLet me know what you think ! And thanks again, this issue has been here for a while now ^^",
"@lhoestq Yes, before going with `eval`, I thought about this approach with the \"spec\". The only issue with this approach is that we have to come up with a represenation for the `rounding` arg.\r\n\r\nWhat do you think about this (maybe too verbose)?\r\n```python\r\n>>> print(ReadInstruction(\"train\", rounding=\"pct1_dropremainder\", from_=10, to=30).to_spec())\r\ntrain[10:30](pct1_dropremainder)",
"Good idea !\r\n\r\nFirst we must note that the rounding is only used for percentage instructions.\r\nFor absolute instructions there's no rounding ambiguity.\r\n\r\nBy default the rounding is set to `closest`. For example if you have a train set of 999 examples and if you provide an instruction spec `\"train[:1%]\"`, you're going to get the first ten examples (while the `pct1_dropremainder ` rounding would return 9 examples).\r\n\r\nCurrently there's no way to get an instruction with a `pct1_dropremainder` rounding strategy from an instruction spec.\r\nSo we can either drop the support of `pct1_dropremainder` or define a way to use this strategy from a spec.\r\nI don't think dropping `pct1_dropremainder` would be a good idea since it allows to load each percent to all have the same number of examples (even the last one). Therefore I think your suggestion makes total sense and we should add a representation of this rounding strategy.\r\n\r\nI like what you suggested `train[10%:30%](pct1_dropremainder)` is fine, and it seems compatible with the regex that parses the instructions specs.",
"@lhoestq I've made the changes as you suggested. Ready for the review.",
"@lhoestq I've added a test and addressed the comments.\r\n\r\nAdditionally, `ReadInstruction` is converted to its spec form in `builder.py` to avoid a circular import that would happen if this logic was in `arrow_reader.py`. If you think it's better to have this logic in `arrow_reader.py`, the import can be delayed by putting it inside a function."
] |
https://api.github.com/repos/huggingface/datasets/issues/3563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3563/comments | https://api.github.com/repos/huggingface/datasets/issues/3563/events | https://github.com/huggingface/datasets/issues/3563 | 1,099,070,368 | I_kwDODunzps5Bgnug | 3,563 | Dataset.from_pandas preserves useless index | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-01-11T12:07:07Z | 2022-01-12T16:11:27Z | 2022-01-12T16:11:27Z | null | ## Describe the bug
Let's say that you want to create a Dataset object from a pandas DataFrame. Most likely you will write something like this:
```
import pandas as pd
from datasets import Dataset
df = pd.read_csv('some_dataset.csv')
# Some DataFrame preprocessing code...
dataset = Dataset.from_pandas(df)
```
If your preprocessing code contains indexing operations like this:
```
df = df[df.col1 == some_value]
```
then your `df.index` can be changed from the default ```RangeIndex(start=0, stop=16590, step=1)``` to something like this:
```
Int64Index([    0,     1,     2,     3,     4,     5,     6,     7,     8,     9,
            ...
            83979, 83980, 83981, 83982, 83983, 83984, 83985, 83986, 83987, 83988],
           dtype='int64', length=16590)
```
In this case, PyArrow (by default) will preserve this non-standard index. As a result, your dataset object will have an extra field that you most likely don't want: '__index_level_0__'.
You can easily fix this by adding the extra argument ```preserve_index=False``` to the call of ```InMemoryTable.from_pandas``` in ```arrow_dataset.py```.
If you agree that this isn't desirable behavior, I can make a PR fixing it.
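In the meantime, a user-side workaround sketch (file name and filter are placeholders): reset the index before the conversion so that no `__index_level_0__` column is created.
```python
import pandas as pd
from datasets import Dataset

df = pd.read_csv("some_dataset.csv")
df = df[df.col1 == "some_value"].reset_index(drop=True)  # back to a clean RangeIndex
dataset = Dataset.from_pandas(df)
```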
## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-5.11.0-44-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3563/timeline | null | completed | null | null | false | [
"Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2805 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2805/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2805/comments | https://api.github.com/repos/huggingface/datasets/issues/2805/events | https://github.com/huggingface/datasets/pull/2805 | 971,436,456 | MDExOlB1bGxSZXF1ZXN0NzEzMTc3MTI4 | 2,805 | Fix streaming zip files from canonical datasets | [] | closed | false | null | 0 | 2021-08-16T07:11:40Z | 2021-08-16T10:34:00Z | 2021-08-16T10:34:00Z | null | Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`.
However, that broke streaming zip files used in canonical `datasets` scripts, which normally call `join()` (patched with `xjoin()`) on the result of `StreamingDownloadManager.download_and_extract()`.
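For context, this is roughly the pattern used in canonical scripts that has to keep working under streaming (URL and file names are made up):
```python
import os

import datasets


class MyZipDataset(datasets.GeneratorBasedBuilder):
    # _info and _generate_examples omitted for brevity
    def _split_generators(self, dl_manager):
        archive_dir = dl_manager.download_and_extract("https://example.com/data.zip")
        # In streaming mode, os.path.join is patched to xjoin, so this must keep working:
        filepath = os.path.join(archive_dir, "train.csv")
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": filepath})]
```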
This PR fixes this issue and allows streaming zip files both from:
- canonical datasets scripts and
- data files. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2805/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2805/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2805.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2805",
"merged_at": "2021-08-16T10:34:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2805.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2805"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3151/comments | https://api.github.com/repos/huggingface/datasets/issues/3151/events | https://github.com/huggingface/datasets/pull/3151 | 1,033,890,501 | PR_kwDODunzps4tkL7t | 3,151 | Re-add faiss to windows testing suite | [] | closed | false | null | 0 | 2021-10-22T19:34:29Z | 2021-11-02T10:47:34Z | 2021-11-02T10:06:03Z | null | In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore included it for Windows in the setup file.
At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously bad at playing nice on Windows. The required change isn't pretty, but it works: first set `delete=False` so the file is not automatically deleted on exit, then manually delete the file with `unlink`. It's weird, I know, but it works.
```python
with tempfile.NamedTemporaryFile(delete=False) as tmp_file:
    # do stuff
os.unlink(tmp_file.name)
```
closes #3150 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3151/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3151",
"merged_at": "2021-11-02T10:06:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3151"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/935/comments | https://api.github.com/repos/huggingface/datasets/issues/935/events | https://github.com/huggingface/datasets/pull/935 | 753,863,055 | MDExOlB1bGxSZXF1ZXN0NTI5ODU5MjM4 | 935 | add PIB dataset | [] | closed | false | null | 4 | 2020-11-30T22:55:43Z | 2020-12-01T23:17:11Z | 2020-12-01T23:17:11Z | null | This pull request will add PIB dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/935/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/935",
"merged_at": "2020-12-01T23:17:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/935"
} | true | [
"Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks",
"Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/pib/pib.py:20:1: F401 'json' imported but unused\r\ndatasets/pib/pib.py:36:84: W291 trailing whitespace\r\n```\r\nand \r\n```\r\nFAILED tests/test_file_encoding.py::TestFileEncoding::test_no_encoding_on_file_open\r\n```\r\n\r\nTo fix the `test_no_encoding_on_file_open` you just have to specify an encoding while opening a text file. For example `encoding=\"utf-8\"`\r\n",
"All suggested changes are done.",
"Nice ! can you re-generate the dataset_infos.json file to take into account the feature type change ?\r\n```\r\ndatasets-cli test ./datasets/pib --save_infos --all_configs --ignore_verifications\r\n```\r\nAnd also format your code ?\r\n```\r\nmake style\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/1403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1403/comments | https://api.github.com/repos/huggingface/datasets/issues/1403/events | https://github.com/huggingface/datasets/pull/1403 | 760,571,419 | MDExOlB1bGxSZXF1ZXN0NTM1MzgxMzQ3 | 1,403 | Add dataset clickbait_news_bg | [] | closed | false | null | 1 | 2020-12-09T18:32:12Z | 2020-12-10T09:16:44Z | 2020-12-10T09:16:43Z | null | Adding a new dataset - clickbait_news_bg | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1403/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1403",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1403"
} | true | [
"Closing this pull request, will submit a new one for this dataset."
] |
https://api.github.com/repos/huggingface/datasets/issues/873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/873/comments | https://api.github.com/repos/huggingface/datasets/issues/873/events | https://github.com/huggingface/datasets/issues/873 | 747,959,523 | MDU6SXNzdWU3NDc5NTk1MjM= | 873 | load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error | [] | closed | false | null | 12 | 2020-11-21T06:30:45Z | 2022-05-05T07:19:59Z | 2020-11-22T12:18:05Z | null | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
1 from datasets import load_dataset
----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0')
5 frames
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
608 download_config=download_config,
609 download_mode=download_mode,
--> 610 ignore_verifications=ignore_verifications,
611 )
612
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
513 if not downloaded_from_gcs:
514 self._download_and_prepare(
--> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
516 )
517 # Sync info
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
568 split_dict = SplitDict(dataset_name=self.name)
569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
571
572 # Checksums verification
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager)
252 def _split_generators(self, dl_manager):
253 dl_paths = dl_manager.download_and_extract(_DL_URLS)
--> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)
255 # Generate shared vocabulary
256
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split)
153 else:
154 logging.fatal("Unsupported split: %s", split)
--> 155 cnn = _find_files(dl_paths, "cnn", urls)
156 dm = _find_files(dl_paths, "dm", urls)
157 return cnn + dm
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)
132 else:
133 logging.fatal("Unsupported publisher: %s", publisher)
--> 134 files = sorted(os.listdir(top_dir))
135
136 ret_files = []
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
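As a side note, the discussion below mentions a mirror of the same data that tends to load cleanly when the original Google Drive download misbehaves:
```python
from datasets import load_dataset

dataset = load_dataset("ccdv/cnn_dailymail", "3.0.0")
```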
I have run the code on Google Colab. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/873/timeline | null | completed | null | null | false | [
"I get the same error. It was fixed some days ago, but again it appears",
"Hi @mrm8488 it's working again today without any fix so I am closing this issue.",
"I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is already up-to-date!\r\nDownloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602...\r\n\r\n---------------------------------------------------------------------------\r\n\r\nNotADirectoryError Traceback (most recent call last)\r\n\r\n<ipython-input-9-cd4bf8bea840> in <module>()\r\n 22 \r\n 23 \r\n---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train')\r\n 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation')\r\n 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test')\r\n\r\n5 frames\r\n\r\n/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict)\r\n 132 else:\r\n 133 logging.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 134 files = sorted(os.listdir(top_dir))\r\n 135 \r\n 136 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n\r\nCan someone please take a look ?",
"Sometimes happens. Try in a while",
"It is working now, thank you. ",
"Has anyone solved this ? I still get this error ",
"> atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> \r\n> NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> \r\n> Can someone please take a look ?\r\n\r\n2 short-term workarounds:\r\n\r\n1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n\r\nEither method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.",
"experience the same problem, ccdv/cnn_dailymail not working either. \r\n\r\nSolve this problem by installing datasets library from the master branch:\r\npython -m pip install git+https://github.com/huggingface/datasets.git@master",
"Seem to be getting this again even with 1.18.4. I believe it worked yesterday.",
"Hitting this one as well.",
">Hitting this one as well.\r\n\r\nHas anyone solved this ? I still get this error",
"@yoheimiyamoto The solution provided by @davidshinn (i.e. `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`) worked for me.",
"> > atal(\"Unsupported publisher: %s\", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = []\r\n> > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n> > Can someone please take a look ?\r\n> \r\n> 2 short-term workarounds:\r\n> \r\n> 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works.\r\n> 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in [NotADirectoryError while loading the CNN/Dailymail datasetΒ #996](https://github.com/huggingface/datasets/issues/996), I was getting the \"can't scan this file for viruses\" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by:\r\n> \r\n> 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer.\r\n> 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67.\r\n> 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts.\r\n> \r\n> Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.\r\n\r\nThankyou, editing the urls helped me than the loading dataset line."
] |
https://api.github.com/repos/huggingface/datasets/issues/333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/333/comments | https://api.github.com/repos/huggingface/datasets/issues/333/events | https://github.com/huggingface/datasets/pull/333 | 649,236,516 | MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0 | 333 | fix variable name typo | [] | closed | false | null | 2 | 2020-07-01T19:13:50Z | 2020-07-24T15:43:31Z | 2020-07-24T08:32:16Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/333/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/333",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/333"
} | true | [
"Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```",
"Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/751/comments | https://api.github.com/repos/huggingface/datasets/issues/751/events | https://github.com/huggingface/datasets/issues/751 | 726,820,191 | MDU6SXNzdWU3MjY4MjAxOTE= | 751 | Error loading ms_marco v2.1 using load_dataset() | [] | closed | false | null | 3 | 2020-10-21T19:54:43Z | 2020-11-05T01:31:57Z | 2020-11-05T01:31:57Z | null | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')
10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
353 """
354 try:
--> 355 obj, end = self.scan_once(s, idx)
356 except StopIteration as err:
357 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/751/timeline | null | completed | null | null | false | [
"There was a similar issue in #294 \r\nClearing the cache and download again the dataset did the job. Could you try to clear your cache and download the dataset again ?",
"I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.\r\nLet me know if clearing your cache fixes the problem",
"Yes, it indeed was a cache issue!\r\nThanks for reaching out!!"
] |
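The resolution in the `ms_marco` thread above was to clear the cache and download the dataset again. A minimal sketch of the same idea using the `download_mode` argument of `load_dataset` (assuming a `datasets` version that accepts the `"force_redownload"` string value):

```python
from datasets import load_dataset

# Rebuild the cached files instead of reusing a possibly truncated download.
dataset = load_dataset("ms_marco", "v2.1", download_mode="force_redownload")
```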
https://api.github.com/repos/huggingface/datasets/issues/3857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3857/comments | https://api.github.com/repos/huggingface/datasets/issues/3857/events | https://github.com/huggingface/datasets/issues/3857 | 1,162,525,353 | I_kwDODunzps5FSrqp | 3,857 | Order of dataset changes due to glob.glob. | [
{
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library",
"id": 2067400324,
"name": "generic discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion"
}
] | open | false | null | 1 | 2022-03-08T11:10:30Z | 2022-03-14T11:08:22Z | null | null | ## Describe the bug
After discussion with @lhoestq, just want to mention here that `glob.glob(...)` should always be used in combination with `sorted(...)` to make sure the list of files returned by `glob.glob(...)` doesn't change depending on the OS system.
There are currently multiple datasets that use `glob.glob()` without making use of `sorted(...)` even the streaming download manager (if I'm not mistaken):
https://github.com/huggingface/datasets/blob/c14bfeb4af89da14f870de5ddaa584b08aa08eeb/src/datasets/utils/streaming_download_manager.py#L483 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3857/timeline | null | null | null | null | false | [
"I agree using `glob.glob` alone is bad practice because it's not deterministic. Using `sorted` is a nice solution.\r\n\r\nNote that the `xglob` function you are referring to in the `streaming_download_manager.py` code just extends `glob.glob` for URLs - we don't change its behavior. That's why it has no `sorted()`"
] |
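A small illustration of the `sorted(glob.glob(...))` pattern recommended in the issue above; the file pattern used here is made up for the example:

```python
import glob

# glob.glob gives no ordering guarantee, so the result can differ across
# operating systems and filesystems; sorting makes the file order, and hence
# the resulting dataset order, deterministic.
data_files = sorted(glob.glob("data/*.jsonl"))
```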
https://api.github.com/repos/huggingface/datasets/issues/3784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3784/comments | https://api.github.com/repos/huggingface/datasets/issues/3784/events | https://github.com/huggingface/datasets/issues/3784 | 1,150,057,955 | I_kwDODunzps5EjH3j | 3,784 | Unable to Download CNN-Dailymail Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-02-25T05:24:47Z | 2022-03-03T14:05:17Z | 2022-03-03T14:05:17Z | null | ## Describe the bug
I am unable to download the CNN-Dailymail dataset. Upon closer investigation, I realised why this was happening:
- The dataset sits in Google Drive, and both the CNN and DM datasets are large.
- Google is unable to scan the folder for viruses, **so the link which would originally download the dataset, now downloads the source code of this web page:**

- **This leads to the following error**:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
```
## Expected results
That the dataset is downloaded and processed just like other datasets.
## Actual results
Hit with this error:
```python
NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3784/timeline | null | completed | null | null | false | [
"#self-assign",
"@AngadSethi thanks for reporting and thanks for your PR!",
"Glad to help @albertvillanova! Just fine-tuning the PR, will comment once I am able to get it up and running π",
"Fixed by:\r\n- #3787"
] |
https://api.github.com/repos/huggingface/datasets/issues/1220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1220/comments | https://api.github.com/repos/huggingface/datasets/issues/1220/events | https://github.com/huggingface/datasets/pull/1220 | 758,015,894 | MDExOlB1bGxSZXF1ZXN0NTMzMjYxNTgw | 1,220 | add Korean HateSpeech dataset | [] | closed | false | null | 5 | 2020-12-06T20:31:29Z | 2020-12-08T15:21:09Z | 2020-12-08T11:05:42Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1220/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1220",
"merged_at": "2020-12-08T11:05:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1220"
} | true | [
"It looks like you forgot to `make style` (I forget it a lot too π€¦ )\r\n+ add dummy data",
"hi @cceyda π, thanks for the hint! it looks like i've run into some other errors though in `_split_generators` or `_generate_examples`. do you have any idea of what's wrong here? π
",
"I get the same errors on another pr too, so it probably has something to do with circleci, waiting on help.",
"the `RemoteDatasetTest ` error on the CI is fixed on master so it's fine",
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/5034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5034/comments | https://api.github.com/repos/huggingface/datasets/issues/5034/events | https://github.com/huggingface/datasets/pull/5034 | 1,388,855,136 | PR_kwDODunzps4_wJCu | 5,034 | Update README.md of yahoo_answers_topics dataset | [] | closed | false | null | 4 | 2022-09-28T07:17:33Z | 2022-10-06T15:56:05Z | 2022-10-04T13:49:25Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5034/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5034.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5034",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5034.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5034"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5034). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @borgr. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub.",
"Do you mean to edit through \"edit dataset card\" button? because it just leads to a broken page...\r\nhttps://huggingface.co/datasets/yahoo_answers_topics\r\n\r\nhttps://github.com/huggingface/datasets/tree/main/datasets/yahoo_answers_topics",
"Hi @borgr, good catch! I'm going to report the button leading to a broken link.\r\n\r\nIn the meantime, you can propose a PR to the `README.md` file using this link: https://huggingface.co/datasets/yahoo_answers_topics/blob/main/README.md"
] |
https://api.github.com/repos/huggingface/datasets/issues/1407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1407/comments | https://api.github.com/repos/huggingface/datasets/issues/1407/events | https://github.com/huggingface/datasets/pull/1407 | 760,581,756 | MDExOlB1bGxSZXF1ZXN0NTM1Mzg5ODQx | 1,407 | Add Tweet Eval Dataset | [] | closed | false | null | 4 | 2020-12-09T18:48:57Z | 2021-02-26T08:54:04Z | 2021-02-26T08:54:04Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1407/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1407.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1407",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1407.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1407"
} | true | [
"Hi @lhoestq,\r\n\r\nSeeing that it has been almost two months to this draft, I'm willing to take this forward if you and @abhishekkrthakur don't mind. :)",
"Hi @gchhablani !\r\nSure if @abhishekkrthakur doesn't mind\r\nThanks for your help :)",
"Please feel free :) ",
"Hi @lhoestq, @abhishekkrthakur \r\n\r\nI believe this can be closed. Merged in #1829."
] |
|
https://api.github.com/repos/huggingface/datasets/issues/522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/522/comments | https://api.github.com/repos/huggingface/datasets/issues/522/events | https://github.com/huggingface/datasets/issues/522 | 682,478,833 | MDU6SXNzdWU2ODI0Nzg4MzM= | 522 | dictionnary typo in docs | [] | closed | false | null | 1 | 2020-08-20T07:11:05Z | 2020-08-20T07:52:14Z | 2020-08-20T07:52:13Z | null | Many places dictionary is spelled dictionnary, not sure if its on purpose or not.
Fixed in this pr:
https://github.com/huggingface/nlp/pull/521 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/522/timeline | null | completed | null | null | false | [
"Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/89 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/89/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/89/comments | https://api.github.com/repos/huggingface/datasets/issues/89/events | https://github.com/huggingface/datasets/pull/89 | 617,295,069 | MDExOlB1bGxSZXF1ZXN0NDE3MjM4MjU4 | 89 | Add list and inspect methods - cleanup hf_api | [] | closed | false | null | 0 | 2020-05-13T09:30:15Z | 2020-05-13T14:05:00Z | 2020-05-13T09:33:10Z | null | Add a bunch of methods to easily list and inspect the processing scripts up-loaded on S3:
```python
nlp.list_datasets()
nlp.list_metrics()
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_dataset(path, local_path)
# Copy and prepare the scripts at `local_path` for easy inspection/modification.
nlp.inspect_metric(path, local_path)
```
Also clean up the `HfAPI` to use `dataclasses` for better user-experience | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/89/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/89/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/89.diff",
"html_url": "https://github.com/huggingface/datasets/pull/89",
"merged_at": "2020-05-13T09:33:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/89.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/89"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2724/comments | https://api.github.com/repos/huggingface/datasets/issues/2724/events | https://github.com/huggingface/datasets/issues/2724 | 954,919,607 | MDU6SXNzdWU5NTQ5MTk2MDc= | 2,724 | 404 Error when loading remote data files from private repo | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-07-28T14:24:23Z | 2021-07-29T04:58:49Z | 2021-07-28T16:38:01Z | null | ## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=url, use_auth_token=True)
# HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl
```
## Expected results
Load dataset.
## Actual results
404 Error.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2724/timeline | null | completed | null | null | false | [
"I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160",
"Yes, I remember having properly implemented that: \r\n- https://github.com/huggingface/datasets/commit/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160\r\n- https://github.com/huggingface/datasets/pull/2628/commits/6350a03b4b830339a745f7b1da46ece784ca734c\r\n\r\nBut a subsequent refactoring accidentally removed it...",
"I have opened a PR to fix it @lewtun."
] |
https://api.github.com/repos/huggingface/datasets/issues/1260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1260/comments | https://api.github.com/repos/huggingface/datasets/issues/1260/events | https://github.com/huggingface/datasets/pull/1260 | 758,601,828 | MDExOlB1bGxSZXF1ZXN0NTMzNzQ4ODM3 | 1,260 | Added NewsPH Raw Dataset | [] | closed | false | null | 1 | 2020-12-07T15:17:53Z | 2020-12-08T16:27:15Z | 2020-12-08T16:27:15Z | null | Added the raw version of the NewsPH dataset, which was used to automatically generate the NewsPH-NLI corpus. Dataset of news articles in Filipino from mainstream Philippine news sites on the internet. Can be used as a language modeling dataset or to reproduce the NewsPH-NLI dataset.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1260/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1260",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1260"
} | true | [
"looks like this PR has changes to many files other than the ones for `NewsPH`\r\n\r\nCan you create another branch and another PR please ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5975/comments | https://api.github.com/repos/huggingface/datasets/issues/5975/events | https://github.com/huggingface/datasets/issues/5975 | 1,768,271,343 | I_kwDODunzps5pZa3v | 5,975 | Streaming Dataset behind Proxy - FileNotFoundError | [] | closed | false | null | 9 | 2023-06-21T19:10:02Z | 2023-06-30T05:55:39Z | 2023-06-30T05:55:38Z | null | ### Describe the bug
When trying to stream a dataset I get the following error after a few minutes of waiting.
```
FileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json
If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
I have already set the proxy environment variables. Downloading a Dataset without streaming works as expected.
Still I suspect that this is connected to being behind a proxy.
Is there a way to set the proxy for streaming datasets? Possibly a keyword argument that gets passed to fsspec?
### Steps to reproduce the bug
This is the code I use.
```
import os
os.environ['http_proxy'] = "http://example.com:xxxx"
os.environ['https_proxy'] = "http://example.com:xxxx"
from datasets import load_dataset
ds = load_dataset("facebook/voxpopuli", name="de", streaming=True)
```
### Expected behavior
I would expect the streaming functionality to use the set proxy settings.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-73-generic-x86_64-with-glibc2.35
- Python version: 3.10.11
- Huggingface_hub version: 0.15.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5975/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5975/timeline | null | completed | null | null | false | [
"Duplicate of #",
"Hi ! can you try to set the upper case environment variables `HTTP_PROXY` and `HTTPS_PROXY` ?\r\n\r\nWe use `aiohttp` for streaming and it uses case sensitive environment variables",
"Hi, thanks for the quick reply.\r\n\r\nI set the uppercase env variables with\r\n\r\n`\r\nos.environ['HTTP_PROXY'] = \"http://example.com:xxxx\" \r\nos.environ['HTTPS_PROXY'] = \"http://example.com:xxxx\" \r\n`\r\n\r\nHowever, I still get the same error.\r\n\r\nOne thing that could be helpfull: When downloading a dataset without streaming i get the following message:\r\n_HF google storage unreachable. Downloading and preparing it from source_.\r\nThe download does however work as expected.\r\n",
"Are you able to use `aiohttp` to get the file at `https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json` using your proxy ?",
"It only works when passing trust_env=True when creating the ClientSession, as well as setting ssl=False.\r\n\r\nWorking Example:\r\n\r\n```\r\nimport os\r\n\r\nos.environ['HTTP_PROXY'] = \"xyz\"\r\nos.environ['HTTPS_PROXY'] = \"xyz\"\r\n\r\nimport asyncio\r\nimport aiohttp\r\n\r\nasync def download_pep(url):\r\n async with aiohttp.ClientSession(trust_env=True) as session:\r\n print(\"1\")\r\n async with session.get(url, ssl=False) as resp:\r\n print(\"2\")\r\n content = await resp.text()\r\n print(content)\r\n return content\r\n\r\nasyncio.run(download_pep(\"https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\"))\r\n```\r\n\r\n\r\n\r\nSSL Verification has been a problem with other packages as well. Usually I circumvent the problem by setting\r\n```\r\nimport ssl\r\nssl._create_default_https_context = ssl._create_unverified_context\r\n```\r\n(probably not the best idea for security), although here aiohttp does not seem to use this default context.",
"We do pass `trust_env` as well. Could you share the full stack trace you get when streaming using `datasets` ? That could help locate where we might have forgotten to pass `trust_env`",
"Is there a way to disable ssl verification when streaming a dataset. I suspect this might be the isssue with my proxy.\r\n\r\n\r\nHere you go:\r\n\r\n```\r\nFileNotFoundError Traceback (most recent call last)\r\nCell In[8], line 3\r\n 1 from datasets import load_dataset\r\n----> 3 ds = load_dataset(\"facebook/voxpopuli\", name=\"de\", streaming=True)\r\n 5 sample = next(iter(ds))\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/load.py:1790), in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs)\r\n 1788 # Return iterable dataset in case of streaming\r\n 1789 if streaming:\r\n-> 1790 return builder_instance.as_streaming_dataset(split=split)\r\n 1792 # Some datasets are already processed on the HF google storage\r\n 1793 # Don't try downloading from Google storage for the packaged datasets as text, json, csv or pandas\r\n 1794 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/builder.py:1281), in DatasetBuilder.as_streaming_dataset(self, split, base_path)\r\n 1274 dl_manager = StreamingDownloadManager(\r\n 1275 base_path=base_path or self.base_path,\r\n 1276 download_config=DownloadConfig(use_auth_token=self.use_auth_token, storage_options=self.storage_options),\r\n 1277 dataset_name=self.name,\r\n 1278 data_dir=self.config.data_dir,\r\n 1279 )\r\n 1280 self._check_manual_download(dl_manager)\r\n-> 1281 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)}\r\n 1282 # By default, return all splits\r\n 1283 if split is None:\r\n\r\nFile [~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.cache/huggingface/modules/datasets_modules/datasets/facebook--voxpopuli/b5ff837284f0778eefe0f642734e142d8c3f574eba8c9c8a4b13602297f73604/voxpopuli.py:120), in Voxpopuli._split_generators(self, dl_manager)\r\n 118 def _split_generators(self, dl_manager):\r\n 119 n_shards_path = dl_manager.download_and_extract(_N_SHARDS_FILE)\r\n--> 120 with open(n_shards_path) as f:\r\n 121 n_shards = json.load(f)\r\n 123 if self.config.name == \"en_accented\":\r\n\r\nFile [~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/streaming.py:71), in extend_module_for_streaming..wrap_auth..wrapper(*args, **kwargs)\r\n 69 @wraps(function)\r\n 70 def wrapper(*args, **kwargs):\r\n---> 71 return function(*args, use_auth_token=use_auth_token, **kwargs)\r\n\r\nFile 
[~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517](https://vscode-remote+ssh-002dremote-002bml-002er-002dsoftware-002eat.vscode-resource.vscode-cdn.net/home/wrsbri/projects/audio_course/~/.conda/envs/audio_hf/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:517), in xopen(file, mode, use_auth_token, *args, **kwargs)\r\n 515 except FileNotFoundError:\r\n 516 if file.startswith(config.HF_ENDPOINT):\r\n--> 517 raise FileNotFoundError(\r\n 518 file + \"\\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\"\r\n 519 ) from None\r\n 520 else:\r\n 521 raise\r\n\r\nFileNotFoundError: https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/data/n_files.json\r\nIf the repo is private or gated, make sure to log in with `huggingface-cli login`.\r\n```",
"> Is there a way to disable ssl verification when streaming a dataset.\r\n\r\nI don't think so.\r\n\r\nWe use `fsspec` HTTPFileSystem implementation that is based on `aiohttp`. If you register a subclass of HTTPFileSystem that has SSL disabled by default it could work, but I wouldn't recommended it because it can raise security issues.",
"Okay thanks for your help! I guess I have to figure out how to improve the proxy environment / see if I can make it work with ssl connections."
] |
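A hedged sketch of forwarding proxy settings to the streaming HTTP client, based on the `trust_env` finding in the thread above. It assumes a `datasets` version whose `load_dataset` accepts `storage_options` and forwards `client_kwargs` to `aiohttp.ClientSession` via fsspec's HTTP filesystem; it is not guaranteed to reach every internal request.

```python
import os

from datasets import load_dataset

# aiohttp reads the upper-case variables when trust_env=True is set.
os.environ["HTTP_PROXY"] = "http://example.com:xxxx"
os.environ["HTTPS_PROXY"] = "http://example.com:xxxx"

# storage_options is passed to fsspec's HTTP filesystem; client_kwargs goes to
# aiohttp.ClientSession, so trust_env=True makes it honour the proxy variables.
ds = load_dataset(
    "facebook/voxpopuli",
    name="de",
    streaming=True,
    storage_options={"client_kwargs": {"trust_env": True}},
)
```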
https://api.github.com/repos/huggingface/datasets/issues/5061 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5061/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5061/comments | https://api.github.com/repos/huggingface/datasets/issues/5061/events | https://github.com/huggingface/datasets/issues/5061 | 1,395,476,770 | I_kwDODunzps5TLUki | 5,061 | `_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2022-10-03T23:51:38Z | 2023-07-21T14:43:35Z | 2023-07-21T14:43:34Z | null | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 2489, in map
transformed_shards[index] = async_result.get()
File ".../site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File ".../site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File ".../site-packages/multiprocess/connection.py", line 214, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File ".../site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File ".../site-packages/dill/_dill.py", line 620, in dump
StockPickler.dump(self, obj)
File ".../pickle.py", line 487, in dump
self.save(obj)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 902, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1140, in _save_with_postproc
pickler.save_reduce(*reduction, obj=obj)
File ".../pickle.py", line 717, in save_reduce
save(state)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../pickle.py", line 887, in save_tuple
save(element)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1251, in save_module_dict
StockPickler.save_dict(pickler, obj)
File ".../pickle.py", line 972, in save_dict
self._batch_setitems(obj.items())
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File ".../site-packages/dill/_dill.py", line 1963, in save_function
_save_with_postproc(pickler, (_create_function, (
File ".../site-packages/dill/_dill.py", line 1154, in _save_with_postproc
pickler._batch_setitems(iter(source.items()))
File ".../pickle.py", line 998, in _batch_setitems
save(v)
File ".../pickle.py", line 578, in save
rv = reduce(self.proto)
File ".../logging/__init__.py", line 1774, in __reduce__
raise pickle.PicklingError('logger cannot be pickled')
_pickle.PicklingError: logger cannot be pickled
```
## Steps to reproduce the bug
Sorry I failed to have a minimal reproducible example, but the offending line on my end is
```python
dataset.map(
lambda examples: self.tokenize(examples), # this doesn't matter, lambda e: [1] * len(...) also breaks. In fact I'm pretty sure it breaks before executing this lambda
batched=True,
num_proc=4,
)
```
This does work when `num_proc=1`, so it's likely a multiprocessing thing.
## Expected results
`map` succeeds
## Actual results
The error trace above.
## Environment info
- `datasets` version: 1.16.1 and 2.5.1 both failed
- Platform: Ubuntu 20.04.4 LTS
- Python version: 3.10.4
- PyArrow version: 9.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5061/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5061/timeline | null | completed | null | null | false | [
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess==0.70.12.2, dill==0.3.4`: works\r\n- `multiprocess==0.70.12.2, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.5.1`: doesn't work\r\n- `multiprocess==0.70.13, dill==0.3.4`: can't test, `multiprocess==0.70.13` requires `dill>=0.3.5.1`\r\n\r\nI will pin their versions on my end. I don't have enough knowledge of how python multiprocessing works to debug this, but ideally there could be a fix. It's also possible that I'm doing something wrong in my code, but again the `.name` of the logger that failed to pickle is `datasets.fingerprint`, which I'm not using directly.",
"Do you know which logger fails at being pickled ?",
"I'm not 100% sure how to figure it out -- the stack trace above doesn't clearly give me a place where I can print out who owns the logger, etc. I only found out its `.name` is `datasets.fingerprint` by printing right before\r\n```\r\n File \".../logging/__init__.py\", line 1774, in __reduce__\r\n raise pickle.PicklingError('logger cannot be pickled')\r\n```\r\nIf you have any idea on how to find it out, please let me know.",
"Ok I see, not sure why it triggers this error though, in `logging.py` the code is\r\n\r\nhttps://github.com/python/cpython/blob/c9da063e32725a66495e4047b8a5ed13e72d9e8e/Lib/logging/__init__.py#L1769-L1775\r\n\r\nand on my side it works on 3.10 with dill 0.3.5.1 and multiprocess 0.70.13\r\n```python\r\n>>> datasets.fingerprint.logger.__reduce__() \r\n(<function logging.getLogger(name=None)>, ('datasets.fingerprint',))\r\n```\r\nCould you try to run this code ?\r\n\r\nAre you in an environment where the loggers are instantiated differently ? Can you check the source code of `logging.Logger.__reduce__` in `\".../logging/__init__.py\", line 1774` ?",
"Closing due to inactivity."
] |
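Two things follow from the thread above: the reporter's working version pin (their observation, not an official requirement), and a quick way to check whether a given logger can be pickled at all:

```python
# Version combination reported as working in the thread above:
#   pip install "dill==0.3.4" "multiprocess==0.70.12.2"

import logging
import pickle

# logging.Logger.__reduce__ only succeeds if logging.getLogger(name) returns
# this exact logger object; otherwise pickle.PicklingError is raised.
logger = logging.getLogger("datasets.fingerprint")
pickle.dumps(logger)  # no error means the logger itself is picklable
```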
https://api.github.com/repos/huggingface/datasets/issues/2258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2258/comments | https://api.github.com/repos/huggingface/datasets/issues/2258/events | https://github.com/huggingface/datasets/pull/2258 | 866,870,588 | MDExOlB1bGxSZXF1ZXN0NjIyNjcxNTQy | 2,258 | Fix incorrect update_metadata_with_features calls in ArrowDataset | [] | closed | false | null | 1 | 2021-04-25T00:48:38Z | 2021-04-26T17:16:30Z | 2021-04-26T16:54:04Z | null | Fixes bugs in the `unpdate_metadata_with_features` calls (caused by changes in #2151) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2258/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2258.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2258",
"merged_at": "2021-04-26T16:54:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2258.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2258"
} | true | [
"@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future."
] |
https://api.github.com/repos/huggingface/datasets/issues/3908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3908/comments | https://api.github.com/repos/huggingface/datasets/issues/3908/events | https://github.com/huggingface/datasets/pull/3908 | 1,168,576,963 | PR_kwDODunzps40Z_9F | 3,908 | Update README.md for SQuAD v2 metric | [] | closed | false | null | 1 | 2022-03-14T15:53:10Z | 2022-03-15T17:04:11Z | 2022-03-15T17:04:11Z | null | Putting "Values from popular papers" as a subsection of "Output values" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3908/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3908.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3908",
"merged_at": "2022-03-15T17:04:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3908.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3908"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3908). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/4001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4001/comments | https://api.github.com/repos/huggingface/datasets/issues/4001/events | https://github.com/huggingface/datasets/issues/4001 | 1,179,231,418 | I_kwDODunzps5GSaS6 | 4,001 | How to use generate this multitask dataset for SQUAD? I am getting a value error. | [] | closed | false | null | 4 | 2022-03-24T09:21:51Z | 2022-03-26T09:48:21Z | 2022-03-26T03:35:43Z | null | ## Dataset viewer issue for 'squad_multitask*'
**Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask
*short description of the issue*
I am trying to generate the multitask dataset for SQuAD. However, it gives the error in the dataset explorer as well as on my local machine.
I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format')
Error:
Status code: 400
Exception: TypeError
Message: argument of type 'Value' is not iterable
Kindly advise.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4001/timeline | null | completed | null | null | false | [
"Hi! Replacing `nlp.<obj>` with `datasets.<obj>` in the script should fix the problem. `nlp` has been renamed to `datasets` more than a year ago, so please use `datasets` instead to avoid weird issues.",
"Thank You! Was able to solve with the help of this.",
"But I request you to please fix the same in the dataset hub explorer as well...",
"May I ask how to get this dataset?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2689/comments | https://api.github.com/repos/huggingface/datasets/issues/2689/events | https://github.com/huggingface/datasets/issues/2689 | 949,447,104 | MDU6SXNzdWU5NDk0NDcxMDQ= | 2,689 | cannot save the dataset to disk after rename_column | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-21T08:13:40Z | 2021-07-21T13:11:04Z | 2021-07-21T13:11:04Z | null | ## Describe the bug
If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})
In [7]: dataset.save_to_disk('foo')
In [8]: dataset=load_from_disk('foo')
In [10]: dataset=dataset.rename_column('foo', 'bar')
In [11]: dataset.save_to_disk('foo')
---------------------------------------------------------------------------
PermissionError Traceback (most recent call last)
<ipython-input-11-a3bc0d4fc339> in <module>
----> 1 dataset.save_to_disk('foo')
/mnt/beegfs/projects/meerqat/anaconda3/envs/meerqat/lib/python3.7/site-packages/datasets/arrow_dataset.py in save_to_disk(self, dataset_path
, fs)
597 if Path(dataset_path, config.DATASET_ARROW_FILENAME) in cache_files_paths:
598 raise PermissionError(
--> 599 f"Tried to overwrite {Path(dataset_path, config.DATASET_ARROW_FILENAME)} but a dataset can't overwrite itself."
600 )
601 if Path(dataset_path, config.DATASET_INDICES_FILENAME) in cache_files_paths:
PermissionError: Tried to overwrite foo/dataset.arrow but a dataset can't overwrite itself.
```
N. B. I created the dataset from dict to enable easy reproduction but the same happens if you load an existing dataset (e.g. starting from `In [8]`)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-1160.11.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2689/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2689/timeline | null | completed | null | null | false | [
"Hi ! That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the other hand, the resulting dataset reads the data from another arrow file that is the result of the map transform.\r\n\r\nTherefore overwriting a dataset after `rename_column` is not possible, but it is possible after `map`, since `rename_column` doesn't switch to using another arrow file (the actual data stay the same).",
"Ok, thanks for clearing it up :)"
] |
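A minimal sketch of the workaround implied by the explanation above: after `rename_column` the dataset is still backed by the original Arrow file, so write to a fresh directory instead of overwriting it (directory names follow the example in the issue):

```python
from datasets import load_from_disk

dataset = load_from_disk("foo")
dataset = dataset.rename_column("foo", "bar")

# The renamed dataset still reads from foo/dataset.arrow, so save it
# somewhere else rather than overwriting the file it is using.
dataset.save_to_disk("foo_renamed")
dataset = load_from_disk("foo_renamed")
```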
https://api.github.com/repos/huggingface/datasets/issues/4525 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4525/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4525/comments | https://api.github.com/repos/huggingface/datasets/issues/4525/events | https://github.com/huggingface/datasets/issues/4525 | 1,276,491,386 | I_kwDODunzps5MFbZ6 | 4,525 | Out of memory error on workers while running Beam+Dataflow | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 8 | 2022-06-20T07:28:12Z | 2022-06-30T09:33:57Z | null | null | ## Describe the bug
While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files).
Previously we ran the preprocessing for the "dev" config (only dev files) with success.
Train data files are larger than dev ones and apparently workers run out of memory while processing them.
Any help/hint is welcome!
Error message:
```
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
Info from the Diagnostics tab:
```
Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900
The worker VM had to shut down one or more processes due to lack of memory.
```
## Additional information
### Stack trace
```
Traceback (most recent call last):
File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module>
sys.exit(main())
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run
builder.download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare
self._download_and_prepare(
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare
pipeline_results.wait_until_finish()
File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish
raise DataflowRuntimeException(
apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error:
Data channel closed, unable to receive additional data from SDK sdk-0-0
```
### Logs
```
Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0
Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service.
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4525/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4525/timeline | null | null | null | null | false | [
"Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?",
"@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.",
"Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.",
"I asked my colleague who ran the code and he said apache beam.",
"@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?",
"Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368",
"> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ",
"OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). "
] |
https://api.github.com/repos/huggingface/datasets/issues/4570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4570/comments | https://api.github.com/repos/huggingface/datasets/issues/4570/events | https://github.com/huggingface/datasets/issues/4570 | 1,284,846,168 | I_kwDODunzps5MlTJY | 4,570 | Dataset sharding non-contiguous? | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-06-26T08:34:05Z | 2022-06-30T11:00:47Z | 2022-06-26T14:36:20Z | null | ## Describe the bug
I'm not sure if this is a bug; more likely normal behavior, but I wanted to double check.
Is it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset?
This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made.
## Steps to reproduce the bug
```python
max_shard_size = convert_file_size_to_int('300MB')
dataset_nbytes = dataset.data.nbytes
num_shards = int(dataset_nbytes / max_shard_size) + 1
num_shards = max(num_shards, 1)
print(f"{num_shards=}")
for shard_index in range(num_shards):
shard = dataset.shard(num_shards=num_shards, index=shard_index)
shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet")
os.listdir('tokenized/')
```
## Expected results
I expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example
## Actual results
Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.4
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4570/timeline | null | completed | null | null | false | [
"This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.",
"Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread π ",
"Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ",
"@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ",
"This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)."
] |
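A short sketch of the `contiguous=True` behaviour described in the answer above, with a check that concatenating the shards in index order restores the original row order (the toy column name is made up):

```python
from datasets import Dataset, concatenate_datasets

dataset = Dataset.from_dict({"idx": list(range(10))})

num_shards = 3
shards = [
    dataset.shard(num_shards=num_shards, index=i, contiguous=True)
    for i in range(num_shards)
]

# contiguous=True yields consecutive slices, so index-ordered concatenation
# reproduces the original ordering.
reassembled = concatenate_datasets(shards)
assert reassembled["idx"] == dataset["idx"]
```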
https://api.github.com/repos/huggingface/datasets/issues/3783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3783/comments | https://api.github.com/repos/huggingface/datasets/issues/3783/events | https://github.com/huggingface/datasets/pull/3783 | 1,149,256,744 | PR_kwDODunzps4zZ1jR | 3,783 | Support passing str to iter_files | [] | closed | false | null | 1 | 2022-02-24T12:58:15Z | 2022-02-24T16:01:40Z | 2022-02-24T16:01:40Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3783",
"merged_at": "2022-02-24T16:01:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3783"
} | true | [
"@mariosasko it was indeed while reading that PR, that I remembered this change I wanted to do long ago... π"
] |
https://api.github.com/repos/huggingface/datasets/issues/349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/349/comments | https://api.github.com/repos/huggingface/datasets/issues/349/events | https://github.com/huggingface/datasets/pull/349 | 652,231,571 | MDExOlB1bGxSZXF1ZXN0NDQ1MzQwMTQ1 | 349 | Hyperpartisan news detection | [] | closed | false | null | 2 | 2020-07-07T11:06:37Z | 2020-07-07T20:47:27Z | 2020-07-07T14:57:11Z | null | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and why kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to?
- The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data?
- Should we always subclass `nlp.BuilderConfig`?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/349/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/349",
"merged_at": "2020-07-07T14:57:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/349"
} | true | [
"Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove this small barrier on our side (so that we then still get the download count from your library).",
"This is an interesting aspect indeed!\r\nDo you want to send me an email (see my homepage) and I'll invite you on our slack channel to talk about that?\r\n@ghomasHudson wanna reach out to me as well? I tried to find your email to invite you without success."
] |
https://api.github.com/repos/huggingface/datasets/issues/1045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1045/comments | https://api.github.com/repos/huggingface/datasets/issues/1045/events | https://github.com/huggingface/datasets/pull/1045 | 756,120,760 | MDExOlB1bGxSZXF1ZXN0NTMxNzE2NzIy | 1,045 | Add xitsonga ner corpus | [] | closed | false | null | 1 | 2020-12-03T11:40:48Z | 2020-12-03T17:20:03Z | 2020-12-03T17:19:32Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1045/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1045",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1045"
} | true | [
"Look like this PR includes changes to many other files than the ones related to xitsonga NER.\r\nCould you create another branch and another PR please ?"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4443/comments | https://api.github.com/repos/huggingface/datasets/issues/4443/events | https://github.com/huggingface/datasets/issues/4443 | 1,259,606,334 | I_kwDODunzps5LFBE- | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | [] | open | false | null | 6 | 2022-06-03T08:17:16Z | 2022-06-07T08:23:52Z | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4443/timeline | null | null | null | null | false | [
"If I understand correctly, this is due to the key `split` missing in the line https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41 of the script.\r\nMaybe @albertvillanova could confirm.",
"I'm having a look.",
"Indeed there are several issues in this dataset loading script.\r\n\r\nThe one pointed out by @severo: for the default configuration \"crops\": https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L244\r\n- The download manager downloads `_URL`\r\n- But `_URL` is not defined: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L41\r\n ```python\r\n _URL = {'train': []}\r\n ```\r\n- Afterwards, for each split, a different key in `_ULR` is used, but it only contains one key: \"train\"\r\n - \"valid\" key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L260\r\n - \"test key: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/blob/main/nimrod-uk-1km.py#L269\r\n \r\nThese keys do not exist inside `_URL`, thus the error message reported in the viewer: \r\n```\r\nException: KeyError\r\nMessage: 'valid'\r\n```",
"Would anyone want to submit a Hub PR (or open a Discussion for the authors to be aware) to this dataset? https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km",
"Hi, I'm the main author for that dataset, so I'll work on updating it! I was working on debugging some stuff awhile ago, which is what broke it. ",
"I've opened a Discussion page, so that we can ask/answer and propose fixes until the script works properly: https://huggingface.co/datasets/openclimatefix/nimrod-uk-1km/discussions/1\r\n\r\nCC: @julien-c @jacobbieker "
] |
https://api.github.com/repos/huggingface/datasets/issues/506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/506/comments | https://api.github.com/repos/huggingface/datasets/issues/506/events | https://github.com/huggingface/datasets/pull/506 | 679,164,788 | MDExOlB1bGxSZXF1ZXN0NDY3OTkwNjc2 | 506 | fix dataset.map for function without outputs | [] | closed | false | null | 0 | 2020-08-14T13:40:22Z | 2020-08-17T11:24:39Z | 2020-08-17T11:24:38Z | null | As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.
I fixed that and added tests.
Thanks @avloss for reporting | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/506/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/506.diff",
"html_url": "https://github.com/huggingface/datasets/pull/506",
"merged_at": "2020-08-17T11:24:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/506.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/506"
} | true | [] |
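The fix described in the record above concerns functions passed to `.map()` that return nothing at all. A minimal sketch of that usage pattern, written against the public `datasets` API rather than the PR's internals (the toy dataset and the `record_length` helper are made up for illustration):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["hello", "world"]})
lengths = []

def record_length(example):
    # Side effects only: collect a statistic and return nothing.
    lengths.append(len(example["text"]))

# With the fix above, mapping a function that returns None no longer raises;
# the dataset itself is simply left unchanged.
dataset = dataset.map(record_length)
print(lengths)               # [5, 5]
print(dataset.column_names)  # ['text']
```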
https://api.github.com/repos/huggingface/datasets/issues/4711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4711/comments | https://api.github.com/repos/huggingface/datasets/issues/4711/events | https://github.com/huggingface/datasets/issues/4711 | 1,309,138,570 | I_kwDODunzps5OB96K | 4,711 | Document how to create a dataset loading script for audio/vision | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-07-19T08:03:40Z | 2023-07-25T16:07:52Z | 2023-07-25T16:07:52Z | null | Currently, in our docs for Audio/Vision/Text, we explain how to:
- Load data
- Process data
However, we only explain how to *Create a dataset loading script* for text data.
I think it would be useful to add the same for Audio/Vision, as these have some specificities that differ from text.
See, for example:
- #4697
- and comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492
CC: @stevhliu
| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4711/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4711/timeline | null | completed | null | null | false | [
"I'm closing this issue as both the Audio and Image sections now have a \"Create dataset\" page that contains the info about writing the loading script version of a dataset."
] |
https://api.github.com/repos/huggingface/datasets/issues/1714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1714/comments | https://api.github.com/repos/huggingface/datasets/issues/1714/events | https://github.com/huggingface/datasets/pull/1714 | 782,416,276 | MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0 | 1,714 | Adding adversarialQA dataset | [] | closed | false | null | 5 | 2021-01-08T21:46:09Z | 2021-01-13T16:05:24Z | 2021-01-13T16:05:24Z | null | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1714/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"merged_at": "2021-01-13T16:05:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1714"
} | true | [
"Oh that's a really cool one, we'll review/merge it soon!\r\n\r\nIn the meantime, do you have any specific positive/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)?",
"Thanks Thom, been a while, hope all is well!\r\n\r\nYes, I followed the step by step instructions and found them pretty straightforward. The only things I wasn't sure of were what should go into the YAML tags field for the dataset card, and whether there was a list of options somewhere (maybe akin to the metrics?) of the possible supported tasks. I found the rest very intuitive and the automated metadata and dummy data generation very handy. Thanks!",
"Good point! pinging @yjernite here so he can improve this part!",
"@maxbartolo cool addition!\r\n\r\nFor the YAML tag, you should use the tagging app we provide to choose from a drop-down menu:\r\nhttps://github.com/huggingface/datasets-tagging\r\n\r\nThe process is described toward the end of the [step-by-step guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card), do you have any suggestions for making it easier to find?\r\n\r\nOtherwise, the dataset card is really cool, thanks for making it so complete!\r\n",
"@yjernite\r\n\r\nThanks, YAML tags added. I think my main issue was with the flow of the [step-by-step guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). For example, the [card creator](https://huggingface.co/datasets/card-creator/) is introduced in Step 4, right after creating an empty directory for your dataset. The first field it requires are the YAML tags, which (at least for me) was the last step of the process.\r\n\r\nI'd suggest having the guide structured in the same order as the creation process. For me it was something like:\r\n- Step 1: Preparing your env\r\n- Step 2: Write the loading/processing code\r\n- Step 3: Automatically generate dummy data and `dataset_infos.json`\r\n- Step 4: Tag the dataset\r\n- Step 5: Write the dataset card using the [card creator](https://huggingface.co/datasets/card-creator/)\r\n- Step 6: Open a Pull Request on the main HuggingFace repo and share your work!!\r\n\r\nThanks again!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5817 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5817/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5817/comments | https://api.github.com/repos/huggingface/datasets/issues/5817/events | https://github.com/huggingface/datasets/issues/5817 | 1,694,891,866 | I_kwDODunzps5lBf9a | 5,817 | Setting `num_proc` errors when `.map` returns additional items. | [] | closed | false | null | 3 | 2023-05-03T21:46:53Z | 2023-05-04T21:14:21Z | 2023-05-04T20:22:25Z | null | ### Describe the bug
I'm using a map function that returns more rows than are passed in.
If I try to use `num_proc` I get:
```
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 563, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 528, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3097, in map
for rank, done, content in iflatmap_unordered(
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1372, in iflatmap_unordered
yield queue.get(timeout=0.05)
File "<string>", line 2, in get
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/managers.py", line 818, in _callmethod
kind, result = conn.recv()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 258, in recv
buf = self._recv_bytes()
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 422, in _recv_bytes
buf = self._recv(4)
File "/home/davidg/.virtualenvs/learning/lib/python3.10/site-packages/multiprocess/connection.py", line 391, in _recv
raise EOFError
EOFError
```
### Steps to reproduce the bug
This is copied from the [Datasets docs](https://huggingface.co/docs/datasets/v2.12.0/en/process#batch-processing), with `num_proc` added, and will error.
```py
import datasets
dataset = ... # any old dataset
def chunk_examples(examples):
chunks = []
for sentence in examples["text"]:
chunks += [sentence[i : i + 50] for i in range(0, len(sentence), 50)]
return {"chunks": chunks}
chunked_dataset = dataset.map(
chunk_examples,
batched=True,
remove_columns=dataset.column_names,
num_proc=2, # Remove and it works
)
```
### Expected behavior
Should work fine. On a related note, multi-processing also fails if there is a metaclass anywhere in scope (and there are plenty in the standard library). This is the fault of `dill` and is a long-standing issue.
Have you considered using Loky for multiprocessing? I've found that the built-in `datasets` multi-processing breaks more than it works, so I have written my own function using `loky`, for reference:
```py
import datasets
import loky
def fast_loop(dataset: datasets.Dataset, func, num_proc=None):
if num_proc is None:
import os
num_proc = len(os.sched_getaffinity(0))
shards = [
dataset.shard(num_shards=num_proc, index=i, contiguous=True)
for i in range(num_proc)
]
executor = loky.get_reusable_executor(max_workers=num_proc)
results = executor.map(func, shards)
return datasets.combine.concatenate_datasets(list(results))
```
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5817/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5817/timeline | null | completed | null | null | false | [
"Hi ! Unfortunately I couldn't reproduce on my side locally and with datasets 2.11 and python 3.10.11 on colab.\r\nWhat version of `multiprocess` are you using ?",
"I've got `multiprocess` version `0.70.14`.\r\n\r\nI've done some more testing and the error only occurs in PyCharm's Python Console. It seems to be [this PyCharm bug](https://youtrack.jetbrains.com/issue/PY-51922/Multiprocessing-bug.-Can-only-run-in-debugger.), I'll close this.",
"For other users facing this, my workaround is to conditionally set `num_proc` so I can work interactively in the PyCharm Python Console while developing, then when I'm ready to run on the whole dataset, run it as a script and use multiprocessing.\r\n\r\n```py\r\nmapped_ds = ds.map(\r\n my_map_function,\r\n batched=True,\r\n remove_columns=ds.column_names,\r\n num_proc=1 if \"PYCHARM_HOSTED\" in os.environ else 8,\r\n)\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/4300 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4300/comments | https://api.github.com/repos/huggingface/datasets/issues/4300/events | https://github.com/huggingface/datasets/pull/4300 | 1,230,272,761 | PR_kwDODunzps43iA86 | 4,300 | Add API code examples for loading methods | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-05-09T21:30:26Z | 2022-05-25T16:23:15Z | 2022-05-25T09:20:13Z | null | This PR adds API code examples for loading methods, let me know if I've missed any important parameters we should showcase :)
I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me:
```py
from datasets import inspect_dataset
inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
```
Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same option as the first option in `path`)? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4300/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4300.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4300",
"merged_at": "2022-05-25T09:20:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4300.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4300"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6035/comments | https://api.github.com/repos/huggingface/datasets/issues/6035/events | https://github.com/huggingface/datasets/pull/6035 | 1,805,087,687 | PR_kwDODunzps5Vh_QR | 6,035 | Dataset representation | [] | open | false | null | 1 | 2023-07-14T15:42:37Z | 2023-07-19T19:41:35Z | null | null | __repr__ and _repr_html_ now both are similar to that of Polars | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6035/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6035",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6035"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6035). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/1739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1739/comments | https://api.github.com/repos/huggingface/datasets/issues/1739/events | https://github.com/huggingface/datasets/pull/1739 | 787,219,138 | MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx | 1,739 | fixes and improvements for the WebNLG loader | [] | closed | false | null | 5 | 2021-01-15T21:45:23Z | 2021-01-29T14:34:06Z | 2021-01-29T10:53:03Z | null | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1739/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1739",
"merged_at": "2021-01-29T10:53:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1739"
} | true | [
"The dataset card is fantastic!\r\n\r\nLooks good to me! Did you check that this still passes the slow tests with the existing dummy data?",
"Yes, I ran and passed all the tests specified in [this guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata), including the slow ones.",
"I just added the `from pathlib import Path` at the top to fix the script",
"I ran the tests locally and they all pass, merging",
"Thank you for the review!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1316/comments | https://api.github.com/repos/huggingface/datasets/issues/1316/events | https://github.com/huggingface/datasets/pull/1316 | 759,549,601 | MDExOlB1bGxSZXF1ZXN0NTM0NTM2Mzc1 | 1,316 | Allow GitHub releases as dataset source | [] | closed | false | null | 0 | 2020-12-08T15:39:35Z | 2020-12-10T10:12:00Z | 2020-12-10T10:12:00Z | null | # Summary
Providing a GitHub release URL to `DownloadManager.download()` currently throws a `ConnectionError: Couldn't reach [DOWNLOAD_URL]`. This PR fixes this problem by adding an exception for GitHub releases in `datasets.utils.file_utils.get_from_cache()`.
# Reproduce
```
import datasets
url = 'http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz'
result = datasets.utils.file_utils.get_from_cache(url)
# Returns: ConnectionError: Couldn't reach http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz
```
# Cause
GitHub releases return an HTTP status 403 (FOUND), indicating that the request is being redirected (to AWS S3, in this case). `get_from_cache()` checks whether the status is 200 (OK) or whether the URL matches one of two exceptions (Google Drive or Firebase); otherwise the mentioned error is thrown.
# Solution
Just like the exceptions for Google Drive and Firebase, add a condition for GitHub releases URLs that return the HTTP status 403. If this is the case, continue normally. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1316/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1316",
"merged_at": "2020-12-10T10:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1316"
} | true | [] |
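As a rough illustration of the redirect behaviour described in the record above (plain `requests` only, not the actual `get_from_cache()` code, and the exact status code observed may differ from the 403 quoted there):

```python
import requests

url = "http://github.com/benjaminvdb/DBRD/releases/download/v3.0/DBRD_v3.tgz"

# Ask for the asset without following redirects: GitHub serves release assets
# from a separate storage backend, so the first answer is a redirect-style
# status with a Location header rather than a plain 200 (OK).
response = requests.head(url, allow_redirects=False)
print(response.status_code)
print(response.headers.get("Location"))

# A cache helper that only accepts status 200 (plus a short list of known
# exceptions) would reject this perfectly reachable URL, which is the
# situation the fix above addresses by also allowing GitHub release URLs.
```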
https://api.github.com/repos/huggingface/datasets/issues/1038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1038/comments | https://api.github.com/repos/huggingface/datasets/issues/1038/events | https://github.com/huggingface/datasets/pull/1038 | 755,987,997 | MDExOlB1bGxSZXF1ZXN0NTMxNjA2Njgw | 1,038 | add med_hop | [] | closed | false | null | 0 | 2020-12-03T08:40:27Z | 2020-12-03T16:53:13Z | 2020-12-03T16:52:23Z | null | This PR adds the MedHop dataset from the QAngaroo multi hop reading comprehension datasets
More info:
http://qangaroo.cs.ucl.ac.uk/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1038/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1038.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1038",
"merged_at": "2020-12-03T16:52:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1038.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1038"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3598/comments | https://api.github.com/repos/huggingface/datasets/issues/3598/events | https://github.com/huggingface/datasets/issues/3598 | 1,108,107,199 | I_kwDODunzps5CDF-_ | 3,598 | Readme info not being parsed to show on Dataset card page | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-01-19T13:32:29Z | 2022-01-21T10:20:01Z | 2022-01-21T10:20:01Z | null | ## Describe the bug
The info contained in the README.md file is not being shown on the dataset main page. Basic info and table of contents are properly formatted in the README.
## Steps to reproduce the bug
# Sample code to reproduce the bug
The README file is this one: https://huggingface.co/datasets/softcatala/Tilde-MODEL-Catalan/blob/main/README.md
## Expected results
README info should appear in the Dataset card page.
## Actual results
Nothing is shown. However, labels are parsed and shown successfully.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3598/timeline | null | completed | null | null | false | [
"i suspect a markdown parsing error, @severo do you want to take a quick look at it when you have some time?",
"# Problem\r\nThe issue seems to coming from the front matter of the README\r\n```---\r\nannotations_creators:\r\n- no-annotation\r\nlanguage_creators:\r\n- machine-generated\r\nlanguages:\r\n- 'ca'\r\n- 'de'\r\nlicenses:\r\n- cc-by-4.0\r\nmultilinguality:\r\n- translation\r\npretty_name: Catalan-German aligned corpora to train NMT systems.\r\nsize_categories:\r\n- \"1M<n<10M\" \r\nsource_datasets:\r\n- extended|tilde_model\r\ntask_categories:\r\n- machine-translation\r\ntask_ids:\r\n- machine-translation\r\n---\r\n``` \r\n# Solution\r\nThe fix is to correctly style the README as explained [here](https://huggingface.co/docs/datasets/v1.12.0/dataset_card.html). I have also correctly parsed the font matter as shown below:\r\n```\r\n---\r\nannotations_creators: []\r\nlanguage_creators: [machine-generated]\r\nlanguages: ['ca', 'de']\r\nlicenses: []\r\nmultilinguality:\r\n- multilingual\r\npretty_name: 'Catalan-German aligned corpora to train NMT systems.'\r\nsize_categories: \r\n- 1M<n<10M\r\nsource_datasets: ['extended|tilde_model']\r\ntask_categories: ['machine-translation']\r\ntask_ids: ['machine-translation']\r\n---\r\n```\r\nYou can find the README for a sample dataset [here](https://huggingface.co/datasets/ritwikraha/Test)",
"Thank you. It finally worked implementing your changes and leaving a white line between title and text in the description.",
"Thanks, if this solves your issue, can you please close it?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2931/comments | https://api.github.com/repos/huggingface/datasets/issues/2931/events | https://github.com/huggingface/datasets/pull/2931 | 998,326,359 | PR_kwDODunzps4r1-JH | 2,931 | Fix bug in to_tf_dataset | [] | closed | false | null | 1 | 2021-09-16T15:08:03Z | 2021-09-16T17:01:38Z | 2021-09-16T17:01:37Z | null | Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2931/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2931",
"merged_at": "2021-09-16T17:01:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2931"
} | true | [
"I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"
] |
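The one-line description in the record above relies on the difference between the two formatting methods. A minimal sketch of that difference using the public `Dataset` API (the toy dataset is made up; this is not the `to_tf_dataset()` code itself):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})

# with_format() returns a new, formatted view and leaves `ds` untouched,
# which is why it is the safer choice inside a helper function.
ds_np = ds.with_format("numpy")
print(type(ds_np[0]["x"]))  # NumPy scalar
print(type(ds[0]["x"]))     # still a plain Python int

# set_format() instead changes the formatting of `ds` itself, in place.
ds.set_format("numpy")
print(type(ds[0]["x"]))     # now a NumPy scalar as well
```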
https://api.github.com/repos/huggingface/datasets/issues/1620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1620/comments | https://api.github.com/repos/huggingface/datasets/issues/1620/events | https://github.com/huggingface/datasets/pull/1620 | 772,620,056 | MDExOlB1bGxSZXF1ZXN0NTQzODUxNTY3 | 1,620 | Adding myPOS2017 dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 4 | 2020-12-22T04:04:55Z | 2022-10-03T09:38:23Z | 2022-10-03T09:38:23Z | null | myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar language NLP Research and Developments | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1620/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1620.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1620",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1620.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1620"
} | true | [
"I've updated the code and Readme to reflect your comments.\r\nThank you very much,",
"looks like this PR includes changes about many other files than the ones for myPOS2017\r\n\r\nCould you open another branch and another PR please ?\r\n(or fix this branch)",
"Hi @hungluumfc ! Have you had a chance to fix this PR so that it only includes the changes for `mypos` ? \r\n\r\nFeel free to ping me if you have questions or if I can help :) ",
"Thanks for your contribution, @hungluumfc. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if you need some help."
] |
https://api.github.com/repos/huggingface/datasets/issues/3074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3074/comments | https://api.github.com/repos/huggingface/datasets/issues/3074/events | https://github.com/huggingface/datasets/pull/3074 | 1,025,940,085 | PR_kwDODunzps4tLbe- | 3,074 | add XCSR dataset | [] | closed | false | null | 2 | 2021-10-14T04:39:59Z | 2021-11-08T13:52:36Z | 2021-11-08T13:52:36Z | null | Hi,
I wanted to add the [XCSR ](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :)
I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. Look forward to hearing from you and can't wait to add XCSR to huggingface :D | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3074/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3074.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3074",
"merged_at": "2021-11-08T13:52:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3074.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3074"
} | true | [
"> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!",
"Hi @lhoestq, just a gentle ping on this PR. :D "
] |
https://api.github.com/repos/huggingface/datasets/issues/1522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1522/comments | https://api.github.com/repos/huggingface/datasets/issues/1522/events | https://github.com/huggingface/datasets/pull/1522 | 764,341,594 | MDExOlB1bGxSZXF1ZXN0NTM4NDUzNjg4 | 1,522 | Add semeval 2020 task 11 | [] | closed | false | null | 2 | 2020-12-12T20:32:14Z | 2020-12-15T16:48:52Z | 2020-12-15T16:48:52Z | null | Adding in propaganda detection task (task 11) from Sem Eval 2020 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1522/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1522/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1522.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1522",
"merged_at": "2020-12-15T16:48:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1522.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1522"
} | true | [
"@SBrandeis : Thanks for the feedback! Just updated to use context manager for the `open`s and removed the placeholder text from the `README`!",
"Great, thanks @ZacharySBrown !\r\nFailing tests seem to be unrelated to your changes, merging the current master branch into yours should fix them.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3890/comments | https://api.github.com/repos/huggingface/datasets/issues/3890/events | https://github.com/huggingface/datasets/pull/3890 | 1,165,502,838 | PR_kwDODunzps40QJ8V | 3,890 | Update beans download urls | [] | closed | false | null | 2 | 2022-03-10T17:16:16Z | 2022-03-15T16:47:30Z | 2022-03-15T15:26:48Z | null | Replace the old URLs with the Hub [URLs](https://huggingface.co/datasets/beans/tree/main/data).
Also reported by @stevhliu.
Fix #3889 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3890/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3890/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3890",
"merged_at": "2022-03-15T15:26:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3890"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3890). All of your documentation changes will be reflected on that endpoint.",
"@albertvillanova Thanks for investigating and fixing that issue. I regenerated the `dataset_infos.json` file."
] |
https://api.github.com/repos/huggingface/datasets/issues/3102 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3102/comments | https://api.github.com/repos/huggingface/datasets/issues/3102/events | https://github.com/huggingface/datasets/issues/3102 | 1,029,067,062 | I_kwDODunzps49VlE2 | 3,102 | Unsuitable project description in PyPI | [] | closed | false | null | 0 | 2021-10-18T12:45:00Z | 2021-10-18T12:59:56Z | 2021-10-18T12:59:56Z | null | Currently, `datasets` project description appearing in PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3102/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/908/comments | https://api.github.com/repos/huggingface/datasets/issues/908/events | https://github.com/huggingface/datasets/pull/908 | 752,428,652 | MDExOlB1bGxSZXF1ZXN0NTI4NzUzMjcz | 908 | Add dependency on black for tests | [] | closed | false | null | 1 | 2020-11-27T19:12:48Z | 2020-11-27T21:46:53Z | 2020-11-27T21:46:52Z | null | Add package 'black' as an installation requirement for tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/908/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/908.diff",
"html_url": "https://github.com/huggingface/datasets/pull/908",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/908.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/908"
} | true | [
"Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..."
] |
https://api.github.com/repos/huggingface/datasets/issues/392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/392/comments | https://api.github.com/repos/huggingface/datasets/issues/392/events | https://github.com/huggingface/datasets/pull/392 | 657,313,738 | MDExOlB1bGxSZXF1ZXN0NDQ5NDUwOTkx | 392 | Style change detection | [] | closed | false | null | 0 | 2020-07-15T12:32:14Z | 2020-07-21T13:18:36Z | 2020-07-17T17:13:23Z | null | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now)
- I've converted the integer 0,1 values to a boolean
- Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/392/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/392.diff",
"html_url": "https://github.com/huggingface/datasets/pull/392",
"merged_at": "2020-07-17T17:13:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/392.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/392"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3109/comments | https://api.github.com/repos/huggingface/datasets/issues/3109/events | https://github.com/huggingface/datasets/pull/3109 | 1,030,543,284 | PR_kwDODunzps4tZXmC | 3,109 | Update BibTeX entry | [] | closed | false | null | 0 | 2021-10-19T16:59:31Z | 2021-10-19T17:13:28Z | 2021-10-19T17:13:27Z | null | Update BibTeX entry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3109/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3109.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3109",
"merged_at": "2021-10-19T17:13:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3109.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3109"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1860/comments | https://api.github.com/repos/huggingface/datasets/issues/1860/events | https://github.com/huggingface/datasets/pull/1860 | 805,510,037 | MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz | 1,860 | Add loading from the Datasets Hub + add relative paths in download manager | [] | closed | false | null | 2 | 2021-02-10T13:24:11Z | 2021-02-12T19:13:30Z | 2021-02-12T19:13:29Z | null | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.
For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.
You can load it using
```python
from datasets import load_dataset
d = load_dataset("lhoestq/custom_squad")
```
To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via
```python
_URLS = {
"train": "train-v1.1.json",
"dev": "dev-v1.1.json",
}
downloaded_files = dl_manager.download_and_extract(_URLS)
```
To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url).
I also had to add the auth header of the requests to huggingface.co for private datasets repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1860/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1860",
"merged_at": "2021-02-12T19:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1860"
} | true | [
"I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documentation\r\n\r\nI added a few more tests with the \"lhoestq/test\" dataset I added on the hub and it works fine :) ",
"Here is the PR adding support for datasets repos in `huggingface_hub`: https://github.com/huggingface/huggingface_hub/pull/14"
] |
https://api.github.com/repos/huggingface/datasets/issues/3853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3853/comments | https://api.github.com/repos/huggingface/datasets/issues/3853/events | https://github.com/huggingface/datasets/pull/3853 | 1,162,386,592 | PR_kwDODunzps40F3uN | 3,853 | add ontonotes_conll dataset | [] | closed | false | null | 2 | 2022-03-08T08:53:42Z | 2022-03-15T10:48:02Z | 2022-03-15T10:48:02Z | null | # Introduction of the dataset
OntoNotes v5.0 is the final version of the OntoNotes corpus, and is a large-scale, multi-genre,
multilingual corpus manually annotated with syntactic, semantic and discourse information.
This dataset is the version of OntoNotes v5.0 extended and used in the CoNLL-2012 shared task;
it includes v4 train/dev and v9 test data for English/Chinese/Arabic and the corrected version v12 train/dev/test data (English only).
This dataset is widely used in named entity recognition, coreference resolution, and semantic role labeling.
In the dataset loading script, I adapt the code of [AllenNLP/Ontonotes](https://docs.allennlp.org/models/main/models/common/ontonotes/#ontonotes) to read the special CoNLL files without adding an extra package dependency.
# Some workarounds I did
1. task ids
I add tasks that I can't find anywhere (`semantic-role-labeling`, `lemmatization`, and `word-sense-disambiguation`) to the task category `structure-prediction`, because they are related to "syntax". I feel there might be a better name for this task category, since some of these tasks aren't related to structure, but I don't have a good idea for one.
2. `dl_manager.extract`
Since we get another zip after unzipping the downloaded zip data, I have to use `dl_manager.extract` directly inside `_generate_examples`. But when testing dummy data, `dl_manager.extract` does nothing. So I add a conditional that manually extracts the data when testing dummy data.
# Help
I don't know how to fix the doc building error. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3853/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3853",
"merged_at": "2022-03-15T10:48:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3853"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3853). All of your documentation changes will be reflected on that endpoint.",
"The CI fail is unrelated to this dataset, merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2597/comments | https://api.github.com/repos/huggingface/datasets/issues/2597/events | https://github.com/huggingface/datasets/pull/2597 | 937,917,770 | MDExOlB1bGxSZXF1ZXN0Njg0Mzk0MDIz | 2,597 | Remove redundant prepare_module | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-06T13:47:45Z | 2021-07-12T14:10:52Z | 2021-07-07T13:01:46Z | null | I have noticed that after implementing `load_dataset_builder` (#2500), there is a redundant call to `prepare_module`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2597/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2597.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2597",
"merged_at": "2021-07-07T13:01:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2597.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2597"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/305/comments | https://api.github.com/repos/huggingface/datasets/issues/305/events | https://github.com/huggingface/datasets/issues/305 | 644,148,149 | MDU6SXNzdWU2NDQxNDgxNDk= | 305 | Importing downloaded package repository fails | [
{
"color": "25b21e",
"default": false,
"description": "A bug in a metric script",
"id": 2067393914,
"name": "metric bug",
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug"
}
] | closed | false | null | 0 | 2020-06-23T21:09:05Z | 2020-07-30T16:44:23Z | 2020-07-30T16:44:23Z | null | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently however, the code seems to have trouble with imports within the package. For example:
```
import nlp
coval = nlp.load_metric('coval')
```
yields:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/yacine/Code/nlp/src/nlp/load.py", line 432, in load_metric
metric_cls = import_main_class(module_path, dataset=False)
File "/home/yacine/Code/nlp/src/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/home/yacine/anaconda3/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval.py", line 21, in <module>
from .coval_backend.conll import reader # From: https://github.com/ns-moosavi/coval
File "/home/yacine/Code/nlp/src/nlp/metrics/coval/a78807df33ac45edbb71799caf2b3b47e55df4fd690267808fe963a5e8b30952/coval_backend/conll/reader.py", line 2, in <module>
from conll import mention
ModuleNotFoundError: No module named 'conll'
```
Not sure what the fix would be there. | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/305/timeline | null | completed | null | null | false | [] |
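To make the `ModuleNotFoundError` in the traceback above concrete, here is a self-contained illustration of the underlying Python import rule (a generic sketch with a made-up temporary package, not the eventual fix in `nlp`): an absolute import such as `from conll import mention` only resolves if the directory containing `conll/` is on `sys.path`.

```python
import sys
import tempfile
from pathlib import Path

# Recreate a tiny layout like the unpacked coval repository: a `conll`
# package containing `mention.py` and a `reader.py` that imports it.
root = Path(tempfile.mkdtemp())
pkg = root / "conll"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "mention.py").write_text("VALUE = 'mention module'\n")
(pkg / "reader.py").write_text("from conll import mention\n")

# Putting only the package directory itself on sys.path reproduces the
# failure: the name `conll` is not importable from there.
sys.path.insert(0, str(pkg))
try:
    import reader
except ModuleNotFoundError as err:
    print("fails as in the issue:", err)

# Adding the parent directory (so that `conll` itself becomes importable)
# makes the absolute import resolvable.
sys.path.insert(0, str(root))
import reader
print("works once the package root is on sys.path:", reader.mention.VALUE)
```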
https://api.github.com/repos/huggingface/datasets/issues/4272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4272/comments | https://api.github.com/repos/huggingface/datasets/issues/4272/events | https://github.com/huggingface/datasets/pull/4272 | 1,224,635,660 | PR_kwDODunzps43QQQt | 4,272 | Fix typo in logging docs | [] | closed | false | null | 4 | 2022-05-03T20:47:57Z | 2022-05-04T15:42:27Z | 2022-05-04T06:58:36Z | null | This PR fixes #4271. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4272/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4272/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4272.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4272",
"merged_at": "2022-05-04T06:58:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4272.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4272"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> This PR fixes #4271.\r\n\r\nThings have not changed when searching \"tqdm\" in the Dataset document. The second result still performs as \"Enable\".",
"Hi @jiangwy99, the fix will appear on the `main` version of the docs:\r\n\r\n\r\n",
"Fixed now, thanks."
] |
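For reference, the progress-bar helpers that the edited logging docs describe can be toggled as below; this is a hedged usage sketch assuming the current top-level `datasets` helpers.
```python
# Usage sketch (assumes the top-level helpers documented on the logging page).
from datasets import disable_progress_bar, enable_progress_bar

disable_progress_bar()   # silence tqdm bars during load_dataset / map / filter
# ... quiet work here ...
enable_progress_bar()    # restore the default behaviour
```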
https://api.github.com/repos/huggingface/datasets/issues/45 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/45/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/45/comments | https://api.github.com/repos/huggingface/datasets/issues/45/events | https://github.com/huggingface/datasets/pull/45 | 612,386,583 | MDExOlB1bGxSZXF1ZXN0NDEzMzQzMjAy | 45 | [Load] Separate Module kwargs and builder kwargs. | [] | closed | false | null | 0 | 2020-05-05T07:09:54Z | 2022-10-04T09:32:11Z | 2020-05-08T09:51:22Z | null | Kwargs for the `load_module` fn should be passed with `module_xxxx` to `builder_kwargs` of `load` fn.
This is a follow-up PR of: https://github.com/huggingface/nlp/pull/41 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/45/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/45/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/45.diff",
"html_url": "https://github.com/huggingface/datasets/pull/45",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/45.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/45"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6083/comments | https://api.github.com/repos/huggingface/datasets/issues/6083/events | https://github.com/huggingface/datasets/pull/6083 | 1,824,832,348 | PR_kwDODunzps5WkgAI | 6,083 | set dev version | [] | closed | false | null | 3 | 2023-07-27T17:10:41Z | 2023-07-27T17:22:05Z | 2023-07-27T17:11:01Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6083/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6083.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6083",
"merged_at": "2023-07-27T17:11:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6083.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6083"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6083). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006049 / 0.011353 (-0.005304) | 0.003698 / 0.011008 (-0.007310) | 0.080614 / 0.038508 (0.042106) | 0.060955 / 0.023109 (0.037846) | 0.337119 / 0.275898 (0.061221) | 0.369544 / 0.323480 (0.046064) | 0.004681 / 0.007986 (-0.003305) | 0.002892 / 0.004328 (-0.001436) | 0.062907 / 0.004250 (0.058657) | 0.049235 / 0.037052 (0.012183) | 0.338842 / 0.258489 (0.080353) | 0.371172 / 0.293841 (0.077331) | 0.027016 / 0.128546 (-0.101530) | 0.007940 / 0.075646 (-0.067706) | 0.260902 / 0.419271 (-0.158369) | 0.044566 / 0.043533 (0.001034) | 0.342354 / 0.255139 (0.087215) | 0.359829 / 0.283200 (0.076629) | 0.020801 / 0.141683 (-0.120881) | 1.444111 / 1.452155 (-0.008044) | 1.515595 / 1.492716 (0.022879) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.183446 / 0.018006 (0.165439) | 0.437071 / 0.000490 (0.436581) | 0.003124 / 0.000200 (0.002924) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023760 / 0.037411 (-0.013651) | 0.072812 / 0.014526 (0.058286) | 0.082790 / 0.176557 (-0.093766) | 0.146330 / 0.737135 (-0.590805) | 0.084469 / 0.296338 (-0.211870) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.395215 / 0.215209 (0.180006) | 3.953023 / 2.077655 (1.875369) | 1.914268 / 1.504120 (0.410148) | 1.710195 / 1.541195 (0.169001) | 1.782594 / 1.468490 
(0.314104) | 0.503651 / 4.584777 (-4.081126) | 3.039656 / 3.745712 (-0.706056) | 4.364691 / 5.269862 (-0.905171) | 2.597762 / 4.565676 (-1.967915) | 0.057384 / 0.424275 (-0.366891) | 0.006419 / 0.007607 (-0.001188) | 0.467214 / 0.226044 (0.241169) | 4.661425 / 2.268929 (2.392497) | 2.341957 / 55.444624 (-53.102667) | 1.977598 / 6.876477 (-4.898878) | 2.178005 / 2.142072 (0.035933) | 0.588492 / 4.805227 (-4.216735) | 0.124972 / 6.500664 (-6.375692) | 0.060902 / 0.075469 (-0.014567) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.243092 / 1.841788 (-0.598695) | 18.369971 / 8.074308 (10.295663) | 13.939700 / 10.191392 (3.748308) | 0.149275 / 0.680424 (-0.531149) | 0.016873 / 0.534201 (-0.517328) | 0.334245 / 0.579283 (-0.245038) | 0.353832 / 0.434364 (-0.080532) | 0.382720 / 0.540337 (-0.157617) | 0.534634 / 1.386936 (-0.852302) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005933 / 0.011353 (-0.005420) | 0.003695 / 0.011008 (-0.007313) | 0.063457 / 0.038508 (0.024949) | 0.062347 / 0.023109 (0.039238) | 0.412370 / 0.275898 (0.136472) | 0.450399 / 0.323480 (0.126920) | 0.004627 / 0.007986 (-0.003358) | 0.002822 / 0.004328 (-0.001507) | 0.063819 / 0.004250 (0.059569) | 0.049154 / 0.037052 (0.012101) | 0.428196 / 0.258489 (0.169707) | 0.464109 / 0.293841 (0.170268) | 0.026967 / 0.128546 (-0.101579) | 0.007876 / 0.075646 (-0.067770) | 0.068479 / 0.419271 (-0.350793) | 0.041080 / 0.043533 (-0.002453) | 0.399817 / 0.255139 (0.144678) | 0.426900 / 0.283200 (0.143701) | 0.019931 / 0.141683 (-0.121752) | 1.461642 / 1.452155 (0.009487) | 1.529314 / 1.492716 (0.036598) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230256 / 0.018006 (0.212249) | 0.423442 / 0.000490 (0.422952) | 0.002492 / 0.000200 (0.002292) | 0.000072 / 0.000054 (0.000018) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025798 / 0.037411 (-0.011613) | 0.077361 / 0.014526 (0.062836) | 0.088454 / 0.176557 (-0.088102) | 0.142137 / 0.737135 (-0.594998) | 0.088213 / 0.296338 (-0.208125) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417656 / 0.215209 (0.202447) | 4.157095 / 2.077655 (2.079440) | 2.132863 / 1.504120 (0.628743) | 1.967220 / 1.541195 (0.426025) | 2.020505 / 1.468490 (0.552015) | 0.496835 / 4.584777 (-4.087942) | 2.989251 / 3.745712 (-0.756462) | 2.849315 / 5.269862 (-2.420546) | 1.848941 / 4.565676 (-2.716736) | 0.057307 / 0.424275 (-0.366968) | 0.006825 / 0.007607 (-0.000782) | 0.489103 / 0.226044 (0.263059) | 4.904776 / 2.268929 (2.635847) | 2.593914 / 55.444624 (-52.850710) | 2.253384 / 6.876477 (-4.623093) | 2.426384 / 2.142072 (0.284312) | 0.592467 / 4.805227 (-4.212760) | 0.126122 / 6.500664 (-6.374542) | 0.063160 / 0.075469 (-0.012309) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.313020 / 1.841788 (-0.528768) | 18.343984 / 8.074308 (10.269676) | 13.763060 / 10.191392 (3.571668) | 0.146312 / 0.680424 (-0.534111) | 0.016980 / 0.534201 (-0.517221) | 0.339572 / 0.579283 (-0.239711) | 0.351310 / 0.434364 (-0.083054) | 0.397616 / 0.540337 (-0.142721) | 0.536879 / 1.386936 (-0.850057) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009979 / 0.011353 (-0.001374) | 0.005024 / 0.011008 (-0.005984) | 0.096566 / 0.038508 (0.058058) | 0.081181 / 0.023109 (0.058072) | 0.398415 / 0.275898 (0.122517) | 0.513971 / 0.323480 (0.190491) | 0.006716 / 0.007986 (-0.001269) | 0.004350 / 0.004328 (0.000022) | 0.071418 / 0.004250 (0.067168) | 0.065002 / 0.037052 (0.027949) | 0.424791 / 0.258489 (0.166302) | 0.442369 / 0.293841 (0.148528) | 0.054540 / 0.128546 (-0.074007) | 0.014067 / 0.075646 (-0.061580) | 0.368930 / 0.419271 (-0.050341) | 0.082468 / 0.043533 (0.038935) | 0.419875 / 0.255139 (0.164736) | 0.508308 / 0.283200 (0.225108) | 0.050411 / 0.141683 (-0.091272) | 1.582271 / 1.452155 (0.130116) | 1.842033 / 1.492716 (0.349317) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290427 / 0.018006 (0.272420) | 0.594736 / 0.000490 (0.594246) | 0.007058 / 0.000200 (0.006858) | 0.000149 / 0.000054 (0.000095) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027085 / 0.037411 (-0.010326) | 0.087626 / 0.014526 (0.073101) | 0.094299 / 0.176557 (-0.082257) | 0.160169 / 0.737135 (-0.576966) | 0.101474 / 0.296338 (-0.194864) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.545845 / 0.215209 (0.330636) | 5.674389 / 2.077655 (3.596734) | 2.489065 / 1.504120 (0.984945) | 2.166674 / 1.541195 (0.625479) | 2.166925 / 1.468490 
(0.698434) | 0.791244 / 4.584777 (-3.793533) | 4.944878 / 3.745712 (1.199165) | 4.121628 / 5.269862 (-1.148234) | 2.701262 / 4.565676 (-1.864415) | 0.087609 / 0.424275 (-0.336666) | 0.006945 / 0.007607 (-0.000662) | 0.668478 / 0.226044 (0.442434) | 6.552813 / 2.268929 (4.283885) | 3.164698 / 55.444624 (-52.279927) | 2.447333 / 6.876477 (-4.429144) | 2.608271 / 2.142072 (0.466198) | 0.954202 / 4.805227 (-3.851025) | 0.187730 / 6.500664 (-6.312934) | 0.063229 / 0.075469 (-0.012240) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.461042 / 1.841788 (-0.380746) | 21.601409 / 8.074308 (13.527101) | 18.553604 / 10.191392 (8.362212) | 0.234571 / 0.680424 (-0.445853) | 0.027119 / 0.534201 (-0.507082) | 0.423448 / 0.579283 (-0.155835) | 0.556397 / 0.434364 (0.122033) | 0.493958 / 0.540337 (-0.046379) | 0.711345 / 1.386936 (-0.675591) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008637 / 0.011353 (-0.002716) | 0.014450 / 0.011008 (0.003442) | 0.084135 / 0.038508 (0.045627) | 0.080513 / 0.023109 (0.057403) | 0.557941 / 0.275898 (0.282042) | 0.563199 / 0.323480 (0.239719) | 0.006475 / 0.007986 (-0.001510) | 0.004407 / 0.004328 (0.000078) | 0.088537 / 0.004250 (0.084287) | 0.060871 / 0.037052 (0.023819) | 0.593077 / 0.258489 (0.334588) | 0.615572 / 0.293841 (0.321732) | 0.050157 / 0.128546 (-0.078389) | 0.014313 / 0.075646 (-0.061333) | 0.091784 / 0.419271 (-0.327487) | 0.065649 / 0.043533 (0.022116) | 0.532569 / 0.255139 (0.277430) | 0.580775 / 0.283200 (0.297575) | 0.036434 / 0.141683 (-0.105249) | 2.080051 / 1.452155 (0.627896) | 1.907430 / 1.492716 (0.414713) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.297763 / 0.018006 (0.279757) | 0.670408 / 0.000490 (0.669918) | 0.000467 / 0.000200 (0.000267) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030297 / 0.037411 (-0.007114) | 0.100310 / 0.014526 (0.085784) | 0.113158 / 0.176557 (-0.063398) | 0.149599 / 0.737135 (-0.587536) | 0.102620 / 0.296338 (-0.193718) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616588 / 0.215209 (0.401379) | 6.572262 / 2.077655 (4.494608) | 2.830748 / 1.504120 (1.326628) | 2.478441 / 1.541195 (0.937246) | 2.573017 / 1.468490 (1.104527) | 0.844154 / 4.584777 (-3.740623) | 5.161625 / 3.745712 (1.415913) | 4.541114 / 5.269862 (-0.728748) | 2.907804 / 4.565676 (-1.657872) | 0.097044 / 0.424275 (-0.327231) | 0.008692 / 0.007607 (0.001085) | 0.806640 / 0.226044 (0.580595) | 7.620521 / 2.268929 (5.351593) | 3.587100 / 55.444624 (-51.857524) | 2.901319 / 6.876477 (-3.975157) | 3.091288 / 2.142072 (0.949215) | 1.056109 / 4.805227 (-3.749118) | 0.209860 / 6.500664 (-6.290804) | 0.079575 / 0.075469 (0.004106) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.966194 / 1.841788 (0.124407) | 28.040515 / 8.074308 (19.966207) | 25.848647 / 10.191392 (15.657255) | 0.255472 / 0.680424 (-0.424951) | 0.036154 / 0.534201 (-0.498046) | 0.515168 / 0.579283 (-0.064115) | 0.696092 / 0.434364 (0.261728) | 0.602712 / 0.540337 (0.062374) | 0.781091 / 1.386936 (-0.605845) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4510/comments | https://api.github.com/repos/huggingface/datasets/issues/4510/events | https://github.com/huggingface/datasets/pull/4510 | 1,273,260,396 | PR_kwDODunzps45wq6o | 4,510 | Add regression test for `ArrowWriter.write_batch` when batch is empty | [] | closed | false | null | 2 | 2022-06-16T08:53:51Z | 2022-06-16T12:38:02Z | 2022-06-16T12:28:19Z | null | As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch`: although the function's docstring says empty batches should be ignored ("Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types."), the current if-statement does not handle `writer.write_batch({})` properly and an error is triggered.
Also, if we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix, the test fails when trying to write an empty batch, as follows:
```
=================================================================================== short test summary info ===================================================================================
FAILED tests/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal
======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s =========================================================================
```
So the batch is not ignored when empty, as `batch_examples={}` won't match the condition `if batch_examples: ...`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4510/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4510/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4510.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4510",
"merged_at": "2022-06-16T12:28:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4510.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4510"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As mentioned by @lhoestq, the current behavior is correct and we should not expect batches with different columns, in that case, the if should fail, as the values of the batch can be empty, but not the actual `batch_examples` value."
] |
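A hedged sketch of the regression case discussed in the PR above (the exact test that was merged may differ): a batch whose columns are present but empty should be ignored by the writer instead of triggering a schema error.
```python
# Regression-test sketch; the test name and assertions are assumptions, not
# necessarily the test added in the PR.
from datasets.arrow_writer import ArrowWriter

def test_write_batch_ignores_empty_values(tmp_path):
    with ArrowWriter(path=str(tmp_path / "out.arrow")) as writer:
        writer.write_batch({"col_1": [], "col_2": []})  # empty batch: must be a no-op
        writer.write_batch({"col_1": ["a", "b"], "col_2": [1, 2]})
        num_examples, num_bytes = writer.finalize()
    assert num_examples == 2
```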
https://api.github.com/repos/huggingface/datasets/issues/3732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3732/comments | https://api.github.com/repos/huggingface/datasets/issues/3732/events | https://github.com/huggingface/datasets/pull/3732 | 1,140,004,022 | PR_kwDODunzps4y7PTU | 3,732 | Support streaming in size estimation function in `push_to_hub` | [] | closed | false | null | 2 | 2022-02-16T13:10:48Z | 2022-02-21T18:18:45Z | 2022-02-21T18:18:44Z | null | This PR adds the streamable version of `os.path.getsize` (`fsspec` can return `None`, so we fall back to `fs.open` to make it more robust) to account for possible streamable paths in the nested `extra_nbytes_visitor` function inside `push_to_hub`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3732/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3732.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3732",
"merged_at": "2022-02-21T18:18:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3732.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3732"
} | true | [
"would this allow to include the size in the dataset info without downloading the files? related to https://github.com/huggingface/datasets/pull/3670",
"@severo I don't think so. We could use this to get `info.download_checksums[\"num_bytes\"]`, but we must process the files to get the rest of the size info. "
] |
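A minimal sketch of the idea described in the PR body above, with illustrative names (not the actual helper added to the library): ask the filesystem for the file size and fall back to opening the stream when the size is not reported.
```python
# Illustrative only; the helper name and fallback details are assumptions.
import fsspec

def streamable_getsize(urlpath: str, storage_options=None) -> int:
    fs, _, (path,) = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)
    size = fs.size(path)
    if size is None:  # some streaming filesystems don't expose the size up front
        with fs.open(path, "rb") as f:
            f.seek(0, 2)  # seek to the end of the stream
            size = f.tell()
    return size
```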
https://api.github.com/repos/huggingface/datasets/issues/4178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4178/comments | https://api.github.com/repos/huggingface/datasets/issues/4178/events | https://github.com/huggingface/datasets/pull/4178 | 1,207,787,073 | PR_kwDODunzps42ZfFN | 4,178 | [feat] Add ImageNet dataset | [] | closed | false | null | 3 | 2022-04-19T06:01:35Z | 2022-04-29T21:43:59Z | 2022-04-29T21:37:08Z | null | To use the dataset download the tar file
[imagenet_object_localization_patched2019.tar.gz](https://www.kaggle.com/competitions/imagenet-object-localization-challenge/data?select=imagenet_object_localization_patched2019.tar.gz) from Kaggle and then point the datasets library to it by using:
```py
from datasets import load_dataset
dataset = load_dataset("imagenet",
data_dir="/path/to/imagenet_object_localization_patched2019.tar.gz")
```
Currently train and validation splits are supported. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4178/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4178",
"merged_at": "2022-04-29T21:37:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4178"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the comments. I believe I have addressed all of them and also decreased the size of the dummy data file, so it should be ready for a re-review. I also made a change to allow adding synset mapping and valprep script in config in case we add ImageNet 21k some time later. ",
"@lhoestq I have updated the PR to address all of the review comments."
] |
https://api.github.com/repos/huggingface/datasets/issues/5508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5508/comments | https://api.github.com/repos/huggingface/datasets/issues/5508/events | https://github.com/huggingface/datasets/issues/5508 | 1,573,290,359 | I_kwDODunzps5dxoF3 | 5,508 | Saving a dataset after setting format to torch doesn't work, but only if filtering | [] | closed | false | null | 2 | 2023-02-06T21:08:58Z | 2023-02-09T14:55:26Z | 2023-02-09T14:55:26Z | null | ### Describe the bug
Saving a dataset after setting format to torch doesn't work, but only if filtering
### Steps to reproduce the bug
```
a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save") # saves successfully
a.filter(None).save_to_disk("test_save_filter") # does not
>> [...] TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.
# note: skipping the format change to torch lets this work.
```
### Expected behavior
Saving to work
### Environment info
- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5508/timeline | null | completed | null | null | false | [
"Hey, I'm a research engineer working on language modelling wanting to contribute to open source. I was wondering if I could give it a shot?",
"Hi! This issue was fixed in https://github.com/huggingface/datasets/pull/4972, so please install `datasets>=2.5.0` to avoid it."
] |
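Since the comment above notes the fix shipped in `datasets` 2.5.0, a hedged workaround sketch for older versions is to drop the torch format before filtering and re-apply it afterwards; this is an illustration, not the library's fix.
```python
# Workaround sketch for datasets < 2.5.0 (assumption: resetting the format keeps
# torch tensors out of the arrow writer while filtering).
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a = a.with_format(None)                    # plain python objects while filtering
filtered = a.filter(lambda ex: ex["b"] > 1)
filtered.save_to_disk("test_save_filter")
filtered = filtered.with_format("torch")   # restore the torch format afterwards
```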
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | [] | closed | false | null | 0 | 2021-04-07T10:23:15Z | 2021-04-07T15:50:35Z | 2021-04-07T15:50:34Z | null | This should fix issue #2149 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"merged_at": "2021-04-07T15:50:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2180"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1591/comments | https://api.github.com/repos/huggingface/datasets/issues/1591/events | https://github.com/huggingface/datasets/issues/1591 | 769,383,714 | MDU6SXNzdWU3NjkzODM3MTQ= | 1,591 | IWSLT-17 Link Broken | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 2 | 2020-12-17T00:46:42Z | 2020-12-18T08:06:36Z | 2020-12-18T08:05:28Z | null | ```
FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1591/timeline | null | completed | null | null | false | [
"Sorry, this is a duplicate of #1287. Not sure why it didn't come up when I searched `iwslt` in the issues list.",
"Closing this since its a duplicate"
] |
https://api.github.com/repos/huggingface/datasets/issues/4505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4505/comments | https://api.github.com/repos/huggingface/datasets/issues/4505/events | https://github.com/huggingface/datasets/pull/4505 | 1,272,477,226 | PR_kwDODunzps45uH-o | 4,505 | Fix double dots in data files | [] | closed | false | null | 2 | 2022-06-15T16:31:04Z | 2022-06-15T17:15:58Z | 2022-06-15T17:05:53Z | null | As mentioned in https://github.com/huggingface/transformers/pull/17715 `data_files` can't find a file if the path contains double dots `/../`. This has been introduced in https://github.com/huggingface/datasets/pull/4412, by trying to ignore hidden files and directories (i.e. if they start with a dot)
I fixed this and added a test
cc @sgugger @ydshieh | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4505/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4505",
"merged_at": "2022-06-15T17:05:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4505"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR (apparently something related to `seqeval` on windows) - merging :)"
] |
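An illustrative sketch of the pitfall described in the PR above, using a hypothetical `is_hidden` helper (the library's real implementation differs): "hidden" should be decided per normalized path component, otherwise a path containing `/../` looks hidden because a component starts with a dot.
```python
# Hypothetical helper for illustration only.
import posixpath

def is_hidden(path: str) -> bool:
    return any(
        part.startswith(".") and part not in (".", "..")
        for part in posixpath.normpath(path).split("/")
    )

assert not is_hidden("data/../train.csv")   # ".." alone is not a hidden entry
assert is_hidden("data/.cache/train.csv")   # a real dotfile/dotdir is
```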
https://api.github.com/repos/huggingface/datasets/issues/5290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5290/comments | https://api.github.com/repos/huggingface/datasets/issues/5290/events | https://github.com/huggingface/datasets/pull/5290 | 1,462,716,766 | PR_kwDODunzps5DnQsS | 5,290 | fix error where reading breaks when batch missing an assigned column feature | [] | open | false | null | 1 | 2022-11-24T03:53:46Z | 2022-11-25T03:21:54Z | null | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5290/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5290.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5290",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5290.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5290"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5290). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5190/comments | https://api.github.com/repos/huggingface/datasets/issues/5190/events | https://github.com/huggingface/datasets/issues/5190 | 1,433,014,626 | I_kwDODunzps5VahFi | 5,190 | `path` is `None` when downloading a custom audio dataset from the Hub | [] | closed | false | null | 1 | 2022-11-02T11:51:25Z | 2022-11-02T12:55:02Z | 2022-11-02T12:55:02Z | null | ### Describe the bug
I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub.
Locally, I can see the `audio.path` feature is of the expected form `path/to/data_dir`, but when I download the dataset from the Hub, I see `audio.path` is `None`
Here's an example:
```python
from datasets import load_dataset
ds = load_dataset("lewtun/audio-test-push")
ds["train"][0]
# {
# "audio": {
# "path": None, <-- Is this expected?
# "array": array(
# [
# 3.97140226e-07,
# 7.30310290e-07,
# 7.56406735e-07,
# ...,
# -1.19636677e-01,
# -1.16811886e-01,
# -1.12441722e-01,
# ]
# ),
# "sampling_rate": 44100,
# },
# "song_id": 0,
# "genre_id": 0,
# "genre": "Electronic",
# }
```
Is this expected behaviour? If yes, feel free to close this issue as it's not a true bug then :)
### Steps to reproduce the bug
1. Create an audio dataset with the `audiofolder` feature
2. Push the dataset to the Hub with `push_to_hub()`
3. Download the Hub dataset and inspect the `audio.path` feature
### Expected behavior
`audio.path` points to the file associated with the audio data
### Environment info
- `datasets` version: 2.6.2.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5190/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5190/timeline | null | completed | null | null | false | [
"Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5691/comments | https://api.github.com/repos/huggingface/datasets/issues/5691/events | https://github.com/huggingface/datasets/pull/5691 | 1,649,737,526 | PR_kwDODunzps5NX08d | 5,691 | [docs] Compress data files | [] | closed | false | null | 3 | 2023-03-31T17:17:26Z | 2023-04-19T13:37:32Z | 2023-04-19T07:25:58Z | null | This PR addresses the comments in #5687 about compressing text file extensions before uploading to the Hub. Also clarified what "too large" means based on the GitLFS [docs](https://docs.github.com/en/repositories/working-with-files/managing-large-files/about-git-large-file-storage). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5691/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5691.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5691",
"merged_at": "2023-04-19T07:25:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5691.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5691"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"[Confirmed](https://huggingface.slack.com/archives/C02EMARJ65P/p1680541667004199) with the Hub team the file size limit for the Hugging Face Hub is 10MB :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006789 / 0.011353 (-0.004564) | 0.004935 / 0.011008 (-0.006073) | 0.096796 / 0.038508 (0.058288) | 0.032485 / 0.023109 (0.009376) | 0.335342 / 0.275898 (0.059444) | 0.354999 / 0.323480 (0.031519) | 0.005467 / 0.007986 (-0.002519) | 0.005267 / 0.004328 (0.000939) | 0.073988 / 0.004250 (0.069737) | 0.044402 / 0.037052 (0.007350) | 0.331156 / 0.258489 (0.072666) | 0.363595 / 0.293841 (0.069754) | 0.035301 / 0.128546 (-0.093245) | 0.012141 / 0.075646 (-0.063505) | 0.333164 / 0.419271 (-0.086107) | 0.048818 / 0.043533 (0.005286) | 0.331458 / 0.255139 (0.076319) | 0.343567 / 0.283200 (0.060367) | 0.094963 / 0.141683 (-0.046720) | 1.444383 / 1.452155 (-0.007772) | 1.520093 / 1.492716 (0.027377) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212311 / 0.018006 (0.194305) | 0.436413 / 0.000490 (0.435923) | 0.000333 / 0.000200 (0.000133) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026670 / 0.037411 (-0.010742) | 0.105774 / 0.014526 (0.091248) | 0.115796 / 0.176557 (-0.060760) | 0.176504 / 0.737135 (-0.560631) | 0.121883 / 0.296338 (-0.174456) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400783 / 0.215209 (0.185574) | 4.006608 / 2.077655 (1.928953) | 1.817659 / 1.504120 (0.313539) | 1.619777 / 1.541195 (0.078582) | 1.684247 / 1.468490 
(0.215757) | 0.701116 / 4.584777 (-3.883661) | 3.684056 / 3.745712 (-0.061656) | 2.065258 / 5.269862 (-3.204603) | 1.425460 / 4.565676 (-3.140217) | 0.084519 / 0.424275 (-0.339757) | 0.011949 / 0.007607 (0.004342) | 0.496793 / 0.226044 (0.270749) | 4.978864 / 2.268929 (2.709935) | 2.303388 / 55.444624 (-53.141237) | 1.978341 / 6.876477 (-4.898135) | 2.055744 / 2.142072 (-0.086329) | 0.832022 / 4.805227 (-3.973206) | 0.164715 / 6.500664 (-6.335949) | 0.062701 / 0.075469 (-0.012768) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.178723 / 1.841788 (-0.663065) | 14.583986 / 8.074308 (6.509678) | 14.189402 / 10.191392 (3.998010) | 0.183867 / 0.680424 (-0.496557) | 0.017565 / 0.534201 (-0.516636) | 0.421345 / 0.579283 (-0.157938) | 0.420235 / 0.434364 (-0.014129) | 0.496758 / 0.540337 (-0.043580) | 0.591558 / 1.386936 (-0.795378) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007019 / 0.011353 (-0.004334) | 0.004996 / 0.011008 (-0.006012) | 0.073345 / 0.038508 (0.034836) | 0.033077 / 0.023109 (0.009968) | 0.335954 / 0.275898 (0.060056) | 0.372616 / 0.323480 (0.049136) | 0.005678 / 0.007986 (-0.002308) | 0.003906 / 0.004328 (-0.000423) | 0.072841 / 0.004250 (0.068591) | 0.046829 / 0.037052 (0.009777) | 0.335177 / 0.258489 (0.076688) | 0.382862 / 0.293841 (0.089021) | 0.038406 / 0.128546 (-0.090141) | 0.012110 / 0.075646 (-0.063536) | 0.085796 / 0.419271 (-0.333476) | 0.049896 / 0.043533 (0.006363) | 0.338232 / 0.255139 (0.083093) | 0.361054 / 0.283200 (0.077855) | 0.103171 / 0.141683 (-0.038512) | 1.556692 / 1.452155 (0.104538) | 1.540023 / 1.492716 (0.047306) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223705 / 0.018006 (0.205699) | 0.438771 / 0.000490 (0.438282) | 0.002838 / 0.000200 (0.002639) | 0.000081 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028423 / 0.037411 (-0.008988) | 0.110560 / 0.014526 (0.096035) | 0.121629 / 0.176557 (-0.054928) | 0.173638 / 0.737135 (-0.563498) | 0.127062 / 0.296338 (-0.169277) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425806 / 0.215209 (0.210597) | 4.251051 / 2.077655 (2.173397) | 2.059735 / 1.504120 (0.555615) | 1.864886 / 1.541195 (0.323692) | 1.941553 / 1.468490 (0.473063) | 0.700084 / 4.584777 (-3.884693) | 3.753150 / 3.745712 (0.007438) | 3.218606 / 5.269862 (-2.051256) | 1.439648 / 4.565676 (-3.126028) | 0.085239 / 0.424275 (-0.339037) | 0.012026 / 0.007607 (0.004419) | 0.521564 / 0.226044 (0.295520) | 5.217902 / 2.268929 (2.948973) | 2.557831 / 55.444624 (-52.886793) | 2.240223 / 6.876477 (-4.636254) | 2.364664 / 2.142072 (0.222591) | 0.825884 / 4.805227 (-3.979343) | 0.167800 / 6.500664 (-6.332864) | 0.063552 / 0.075469 (-0.011917) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.255532 / 1.841788 (-0.586256) | 14.747783 / 8.074308 (6.673475) | 14.352263 / 10.191392 (4.160871) | 0.143659 / 0.680424 (-0.536765) | 0.017517 / 0.534201 (-0.516684) | 0.419863 / 0.579283 (-0.159421) | 0.416674 / 0.434364 (-0.017690) | 0.485694 / 0.540337 (-0.054643) | 0.584810 / 1.386936 (-0.802126) |\n\n</details>\n</details>\n\n\n"
] |
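A small sketch of the guidance the docs PR above describes, assuming a plain-text shard and placeholder file names: compress the file with gzip before uploading, since the packaged loaders can generally read `.gz` files directly.
```python
# Hedged illustration of the doc guidance; file names are placeholders.
import gzip
import shutil

with open("train.jsonl", "rb") as src, gzip.open("train.jsonl.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)   # write a compressed copy of the shard
```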
https://api.github.com/repos/huggingface/datasets/issues/3223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3223/comments | https://api.github.com/repos/huggingface/datasets/issues/3223/events | https://github.com/huggingface/datasets/pull/3223 | 1,046,445,507 | PR_kwDODunzps4uLb1E | 3,223 | Update BibTeX entry | [] | closed | false | null | 0 | 2021-11-06T06:41:52Z | 2021-11-06T07:06:38Z | 2021-11-06T07:06:38Z | null | Update BibTeX entry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3223/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3223",
"merged_at": "2021-11-06T07:06:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3223"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1813/comments | https://api.github.com/repos/huggingface/datasets/issues/1813/events | https://github.com/huggingface/datasets/pull/1813 | 800,435,973 | MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz | 1,813 | Support future datasets | [] | closed | false | null | 0 | 2021-02-03T15:26:49Z | 2021-02-05T10:33:48Z | 2021-02-05T10:33:47Z | null | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However, when trying to load a dataset that is only available on master, users currently have to specify `script_version="master"` in `load_dataset` to make it work.
In this case, we could automatically get the dataset from master instead.
I added this feature in this PR.
I also added a warning if a dataset is not available at the version of the local installation of `datasets` but is loaded from master:
```python
>>> load_dataset("silicone", "dyda_da")
Couldn't find file locally at silicone/silicone.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/silicone/silicone.py.
The file was picked from the master branch on github instead at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/silicone/silicone.py.
Downloading and preparing dataset silicone/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to /Users/quentinlhoest/.cache/huggingface/datasets/silicone/dyda_da/1.0.0/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342...
...
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1813/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1813.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1813",
"merged_at": "2021-02-05T10:33:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1813.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1813"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3651/comments | https://api.github.com/repos/huggingface/datasets/issues/3651/events | https://github.com/huggingface/datasets/pull/3651 | 1,118,597,647 | PR_kwDODunzps4xy3De | 3,651 | Update link in wiki_bio dataset | [] | closed | false | null | 2 | 2022-01-30T16:28:54Z | 2022-01-31T14:50:48Z | 2022-01-31T08:38:09Z | null | Fixes #3580 and makes the wiki_bio dataset work again. I changed the link and some documentation, and all the tests pass. Thanks @lhoestq for uploading the dataset to the HuggingFace data bucket.
@lhoestq -- all the tests pass, but I'm still not able to import the dataset, as the old Google Drive link is cached somewhere:
```python
>>> from datasets import load_dataset
load_dataset("wiki_bio>>> load_dataset("wiki_bio")
Using custom data configuration default
Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to /home/jxm3/.cache/huggingface/datasets/wiki_bio/default/1.1.0/5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...
Traceback (most recent call last):
...
File "/home/jxm3/random/datasets/src/datasets/utils/file_utils.py", line 612, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil
```
what do I have to do to invalidate the cache and actually import the dataset? It's clearly set up correctly, since the data is downloaded and processed by the tests.
As an aside, this caching-loading-scripts behavior makes for a really bad developer experience. I just wasted an hour trying to figure out where the caching was happening and how to disable it, and I don't know. All I wanted to do was update the link and submit a pull request! I recommend that you all either change this behavior (i.e. updating the link to a dataset should "just work") or document it, since I couldn't find any information about this in the contributing.md or readme or anywhere else! Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3651/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3651.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3651",
"merged_at": "2022-01-31T08:38:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3651.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3651"
} | true | [
"> all the tests pass, but I'm still not able to import the dataset\r\n\r\nSince it's not merged on `master` yet, you have to provide the path to your local `wiki_bio.py` to use it.\r\nIndeed the library downloads the dataset files from `master` if you have a dev installation of the library.\r\n\r\nI agree it would be nice to change that, and use the local dataset scripts from the `datasets` directory - it feels definitely more natural.",
"Cool, thanks for your help and I agree!"
] |
https://api.github.com/repos/huggingface/datasets/issues/6067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6067/comments | https://api.github.com/repos/huggingface/datasets/issues/6067/events | https://github.com/huggingface/datasets/pull/6067 | 1,819,919,025 | PR_kwDODunzps5WT7EQ | 6,067 | fix tqdm lock | [] | closed | false | null | 3 | 2023-07-25T09:32:16Z | 2023-07-25T10:02:43Z | 2023-07-25T09:54:12Z | null | close https://github.com/huggingface/datasets/issues/6066 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6067/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6067.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6067",
"merged_at": "2023-07-25T09:54:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6067.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6067"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006578 / 0.011353 (-0.004775) | 0.003953 / 0.011008 (-0.007055) | 0.084417 / 0.038508 (0.045908) | 0.076729 / 0.023109 (0.053620) | 0.315369 / 0.275898 (0.039471) | 0.347012 / 0.323480 (0.023533) | 0.005299 / 0.007986 (-0.002686) | 0.003321 / 0.004328 (-0.001007) | 0.063954 / 0.004250 (0.059704) | 0.055810 / 0.037052 (0.018758) | 0.317651 / 0.258489 (0.059162) | 0.352603 / 0.293841 (0.058762) | 0.031355 / 0.128546 (-0.097192) | 0.008493 / 0.075646 (-0.067153) | 0.287295 / 0.419271 (-0.131977) | 0.052716 / 0.043533 (0.009183) | 0.316410 / 0.255139 (0.061271) | 0.328893 / 0.283200 (0.045693) | 0.024005 / 0.141683 (-0.117678) | 1.520333 / 1.452155 (0.068178) | 1.601268 / 1.492716 (0.108552) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205144 / 0.018006 (0.187138) | 0.459160 / 0.000490 (0.458670) | 0.000321 / 0.000200 (0.000121) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027503 / 0.037411 (-0.009908) | 0.081476 / 0.014526 (0.066950) | 0.096759 / 0.176557 (-0.079798) | 0.157888 / 0.737135 (-0.579247) | 0.094592 / 0.296338 (-0.201746) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.384762 / 0.215209 (0.169553) | 3.843503 / 2.077655 (1.765849) | 1.921685 / 1.504120 (0.417565) | 1.752441 / 1.541195 (0.211246) | 1.822105 / 1.468490 
(0.353615) | 0.480243 / 4.584777 (-4.104534) | 3.577220 / 3.745712 (-0.168492) | 5.047560 / 5.269862 (-0.222302) | 2.988008 / 4.565676 (-1.577669) | 0.056430 / 0.424275 (-0.367845) | 0.007180 / 0.007607 (-0.000427) | 0.458113 / 0.226044 (0.232069) | 4.584096 / 2.268929 (2.315168) | 2.395307 / 55.444624 (-53.049317) | 2.080530 / 6.876477 (-4.795947) | 2.239000 / 2.142072 (0.096927) | 0.575822 / 4.805227 (-4.229405) | 0.133303 / 6.500664 (-6.367361) | 0.059449 / 0.075469 (-0.016020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.256496 / 1.841788 (-0.585291) | 19.651614 / 8.074308 (11.577306) | 14.232480 / 10.191392 (4.041088) | 0.146461 / 0.680424 (-0.533963) | 0.018632 / 0.534201 (-0.515569) | 0.399844 / 0.579283 (-0.179439) | 0.411225 / 0.434364 (-0.023139) | 0.458203 / 0.540337 (-0.082135) | 0.669916 / 1.386936 (-0.717020) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006463 / 0.011353 (-0.004890) | 0.003898 / 0.011008 (-0.007110) | 0.064037 / 0.038508 (0.025529) | 0.071982 / 0.023109 (0.048873) | 0.361936 / 0.275898 (0.086038) | 0.393165 / 0.323480 (0.069685) | 0.005207 / 0.007986 (-0.002779) | 0.003231 / 0.004328 (-0.001098) | 0.064318 / 0.004250 (0.060068) | 0.055776 / 0.037052 (0.018724) | 0.383087 / 0.258489 (0.124598) | 0.402428 / 0.293841 (0.108587) | 0.031587 / 0.128546 (-0.096959) | 0.008527 / 0.075646 (-0.067119) | 0.070495 / 0.419271 (-0.348777) | 0.048806 / 0.043533 (0.005273) | 0.369932 / 0.255139 (0.114793) | 0.385268 / 0.283200 (0.102068) | 0.023183 / 0.141683 (-0.118500) | 1.491175 / 1.452155 (0.039020) | 1.534191 / 1.492716 (0.041475) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.224526 / 0.018006 (0.206520) | 0.445460 / 0.000490 (0.444970) | 0.003612 / 0.000200 (0.003412) | 0.000089 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029829 / 0.037411 (-0.007583) | 0.087951 / 0.014526 (0.073425) | 0.100069 / 0.176557 (-0.076487) | 0.154944 / 0.737135 (-0.582192) | 0.101271 / 0.296338 (-0.195067) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.412385 / 0.215209 (0.197175) | 4.108038 / 2.077655 (2.030384) | 2.163578 / 1.504120 (0.659459) | 2.031934 / 1.541195 (0.490740) | 2.155857 / 1.468490 (0.687367) | 0.481132 / 4.584777 (-4.103645) | 3.620868 / 3.745712 (-0.124844) | 5.222175 / 5.269862 (-0.047687) | 3.115637 / 4.565676 (-1.450039) | 0.056480 / 0.424275 (-0.367795) | 0.007761 / 0.007607 (0.000154) | 0.483553 / 0.226044 (0.257509) | 4.830087 / 2.268929 (2.561159) | 2.629919 / 55.444624 (-52.814705) | 2.327551 / 6.876477 (-4.548926) | 2.539934 / 2.142072 (0.397861) | 0.587963 / 4.805227 (-4.217265) | 0.131085 / 6.500664 (-6.369579) | 0.060807 / 0.075469 (-0.014662) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.350003 / 1.841788 (-0.491785) | 19.491713 / 8.074308 (11.417405) | 14.030429 / 10.191392 (3.839037) | 0.174762 / 0.680424 (-0.505662) | 0.018523 / 0.534201 (-0.515678) | 0.394946 / 0.579283 (-0.184337) | 0.407652 / 0.434364 (-0.026712) | 0.465806 / 0.540337 (-0.074531) | 0.605417 / 1.386936 (-0.781519) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006235 / 0.011353 (-0.005118) | 0.003675 / 0.011008 (-0.007333) | 0.080680 / 0.038508 (0.042171) | 0.064378 / 0.023109 (0.041268) | 0.394312 / 0.275898 (0.118414) | 0.428143 / 0.323480 (0.104663) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001429) | 0.062592 / 0.004250 (0.058342) | 0.050957 / 0.037052 (0.013904) | 0.396831 / 0.258489 (0.138342) | 0.438280 / 0.293841 (0.144439) | 0.027743 / 0.128546 (-0.100804) | 0.008068 / 0.075646 (-0.067578) | 0.262541 / 0.419271 (-0.156730) | 0.060837 / 0.043533 (0.017304) | 0.397941 / 0.255139 (0.142802) | 0.417012 / 0.283200 (0.133813) | 0.030153 / 0.141683 (-0.111530) | 1.477115 / 1.452155 (0.024960) | 1.516642 / 1.492716 (0.023926) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.178032 / 0.018006 (0.160026) | 0.445775 / 0.000490 (0.445286) | 0.004275 / 0.000200 (0.004075) | 0.000080 / 0.000054 (0.000026) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025025 / 0.037411 (-0.012386) | 0.074113 / 0.014526 (0.059587) | 0.083814 / 0.176557 (-0.092743) | 0.148860 / 0.737135 (-0.588275) | 0.085408 / 0.296338 (-0.210931) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393714 / 0.215209 (0.178505) | 3.936589 / 2.077655 (1.858934) | 1.910501 / 1.504120 (0.406381) | 1.729670 / 1.541195 (0.188475) | 1.777647 / 1.468490 
(0.309156) | 0.499532 / 4.584777 (-4.085245) | 3.002385 / 3.745712 (-0.743327) | 2.906916 / 5.269862 (-2.362945) | 1.883321 / 4.565676 (-2.682356) | 0.057546 / 0.424275 (-0.366730) | 0.006492 / 0.007607 (-0.001115) | 0.463605 / 0.226044 (0.237560) | 4.620215 / 2.268929 (2.351287) | 2.399021 / 55.444624 (-53.045603) | 2.182962 / 6.876477 (-4.693514) | 2.357344 / 2.142072 (0.215272) | 0.583946 / 4.805227 (-4.221282) | 0.124644 / 6.500664 (-6.376021) | 0.060831 / 0.075469 (-0.014638) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.276412 / 1.841788 (-0.565375) | 18.462522 / 8.074308 (10.388214) | 13.877375 / 10.191392 (3.685983) | 0.150584 / 0.680424 (-0.529840) | 0.016675 / 0.534201 (-0.517526) | 0.331711 / 0.579283 (-0.247573) | 0.366659 / 0.434364 (-0.067705) | 0.396400 / 0.540337 (-0.143938) | 0.555418 / 1.386936 (-0.831518) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005995 / 0.011353 (-0.005358) | 0.003610 / 0.011008 (-0.007399) | 0.061802 / 0.038508 (0.023294) | 0.059265 / 0.023109 (0.036156) | 0.392628 / 0.275898 (0.116730) | 0.413143 / 0.323480 (0.089663) | 0.004687 / 0.007986 (-0.003299) | 0.002843 / 0.004328 (-0.001486) | 0.061932 / 0.004250 (0.057682) | 0.049466 / 0.037052 (0.012413) | 0.402718 / 0.258489 (0.144229) | 0.415039 / 0.293841 (0.121198) | 0.027352 / 0.128546 (-0.101194) | 0.007965 / 0.075646 (-0.067682) | 0.067456 / 0.419271 (-0.351815) | 0.042336 / 0.043533 (-0.001196) | 0.405543 / 0.255139 (0.150404) | 0.403209 / 0.283200 (0.120010) | 0.021459 / 0.141683 (-0.120224) | 1.442861 / 1.452155 (-0.009293) | 1.491213 / 1.492716 (-0.001503) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.248225 / 0.018006 (0.230219) | 0.434174 / 0.000490 (0.433684) | 0.001973 / 0.000200 (0.001773) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025475 / 0.037411 (-0.011936) | 0.077865 / 0.014526 (0.063339) | 0.086980 / 0.176557 (-0.089577) | 0.143682 / 0.737135 (-0.593453) | 0.088634 / 0.296338 (-0.207705) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417591 / 0.215209 (0.202382) | 4.168700 / 2.077655 (2.091045) | 2.335743 / 1.504120 (0.831623) | 2.208174 / 1.541195 (0.666980) | 2.256658 / 1.468490 (0.788168) | 0.503164 / 4.584777 (-4.081613) | 3.026667 / 3.745712 (-0.719045) | 4.496675 / 5.269862 (-0.773187) | 2.741049 / 4.565676 (-1.824628) | 0.057781 / 0.424275 (-0.366494) | 0.006810 / 0.007607 (-0.000797) | 0.490803 / 0.226044 (0.264759) | 4.914369 / 2.268929 (2.645441) | 2.594250 / 55.444624 (-52.850375) | 2.274552 / 6.876477 (-4.601925) | 2.397529 / 2.142072 (0.255456) | 0.593008 / 4.805227 (-4.212220) | 0.126194 / 6.500664 (-6.374470) | 0.062261 / 0.075469 (-0.013208) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.357561 / 1.841788 (-0.484227) | 18.622995 / 8.074308 (10.548687) | 14.142569 / 10.191392 (3.951177) | 0.146527 / 0.680424 (-0.533897) | 0.016863 / 0.534201 (-0.517338) | 0.336219 / 0.579283 (-0.243064) | 0.348650 / 0.434364 (-0.085714) | 0.385958 / 0.540337 (-0.154380) | 0.517958 / 1.386936 (-0.868978) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/181/comments | https://api.github.com/repos/huggingface/datasets/issues/181/events | https://github.com/huggingface/datasets/issues/181 | 622,634,420 | MDU6SXNzdWU2MjI2MzQ0MjA= | 181 | Cannot upload my own dataset | [] | closed | false | null | 6 | 2020-05-21T16:45:52Z | 2020-06-18T22:14:42Z | 2020-06-18T22:14:42Z | null | I look into `nlp-cli` and `user.py` to learn how to upload my own data.
It is supposed to work like this
- Register to get username, password at huggingface.co
- `nlp-cli login` and type username, password
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`
But I got this error.
```
2020-05-21 16:33:52.722464: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
About to upload file /content/ttc/ttc_freq_extra.csv to S3 under filename ttc/ttc_freq_extra.csv and namespace korakot
Proceed? [Y/n] y
Uploading... This might take a while if files are large
Traceback (most recent call last):
File "/usr/local/bin/nlp-cli", line 33, in <module>
service.run()
File "/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py", line 234, in run
token=token, filename=filename, filepath=filepath, organization=self.args.organization
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 141, in presign_and_upload
urls = self.presign(token, filename=filename, organization=organization)
File "/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py", line 132, in presign
return PresignedUrl(**d)
TypeError: __init__() got an unexpected keyword argument 'cdn'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/181/timeline | null | completed | null | null | false | [
"It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.",
"I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nAbout to upload file /content/csv/csv.py to S3 under filename csv/csv.py and namespace korakot\r\nAbout to upload file /content/csv/dummy/0.0.0/dummy_data.zip to S3 under filename csv/dummy/0.0.0/dummy_data.zip and namespace korakot\r\nProceed? [Y/n] y\r\nUploading... This might take a while if files are large\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/nlp-cli\", line 33, in <module>\r\n service.run()\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/commands/user.py\", line 234, in run\r\n token=token, filename=filename, filepath=filepath, organization=self.args.organization\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 141, in presign_and_upload\r\n urls = self.presign(token, filename=filename, organization=organization)\r\n File \"/usr/local/lib/python3.6/dist-packages/nlp/hf_api.py\", line 132, in presign\r\n return PresignedUrl(**d)\r\nTypeError: __init__() got an unexpected keyword argument 'cdn'\r\n```\r\n",
"We haven't tested the dataset upload feature yet cc @julien-c \r\nThis is on our short/mid-term roadmap though",
"Even if I fix the `TypeError: __init__() got an unexpected keyword argument 'cdn'` error, it looks like it still uploads to `https://s3.amazonaws.com/models.huggingface.co/bert/<namespace>/<dataset_name>` instead of `https://s3.amazonaws.com/datasets.huggingface.co/nlp/<namespace>/<dataset_name>`",
"@lhoestq The endpoints in https://github.com/huggingface/nlp/blob/master/src/nlp/hf_api.py should be (depending on the type of file):\r\n```\r\nPOST /api/datasets/presign\r\nGET /api/datasets/listObjs\r\nDELETE /api/datasets/deleteObj\r\nPOST /api/metrics/presign \r\nGET /api/metrics/listObjs\r\nDELETE /api/metrics/deleteObj\r\n```\r\n\r\nIn addition to this, @thomwolf cleaned up the objects with dataclasses but you should revert this and re-align to the hf_api that's in this branch of transformers: https://github.com/huggingface/transformers/pull/4632 (so that potential new JSON attributes in the API output don't break existing versions of any library)",
"New commands are\r\n```\r\nnlp-cli upload_dataset <path/to/dataset>\r\nnlp-cli upload_metric <path/to/metric>\r\nnlp-cli s3_datasets {rm, ls}\r\nnlp-cli s3_metrics {rm, ls}\r\n```\r\nClosing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/1244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1244/comments | https://api.github.com/repos/huggingface/datasets/issues/1244/events | https://github.com/huggingface/datasets/pull/1244 | 758,384,417 | MDExOlB1bGxSZXF1ZXN0NTMzNTY1ODMz | 1,244 | arxiv dataset added | [] | closed | false | null | 0 | 2020-12-07T10:32:54Z | 2020-12-07T11:04:23Z | 2020-12-07T11:04:23Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1244/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1244",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1244"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3476 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3476/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3476/comments | https://api.github.com/repos/huggingface/datasets/issues/3476/events | https://github.com/huggingface/datasets/pull/3476 | 1,087,622,872 | PR_kwDODunzps4wOZ8a | 3,476 | Extend support for streaming datasets that use ET.parse | [] | closed | false | null | 0 | 2021-12-23T11:18:46Z | 2021-12-23T15:34:30Z | 2021-12-23T15:34:30Z | null | This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function.
This PR adds support for streaming mode to datasets:
1. ami
1. assin
1. assin2
1. counter
1. enriched_web_nlg
1. europarl_bilingual
1. hyperpartisan_news_detection
1. polsum
1. qa4mre
1. quail
1. ted_talks_iwslt
1. udhr
1. web_nlg
1. winograd_wsc
CC: @severo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3476/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3476/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3476.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3476",
"merged_at": "2021-12-23T15:34:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3476.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3476"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2452/comments | https://api.github.com/repos/huggingface/datasets/issues/2452/events | https://github.com/huggingface/datasets/issues/2452 | 913,603,877 | MDU6SXNzdWU5MTM2MDM4Nzc= | 2,452 | MRPC test set differences between torch and tensorflow datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-06-07T14:20:26Z | 2021-06-07T14:34:32Z | 2021-06-07T14:34:32Z | null | ## Describe the bug
When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets.
## Steps to reproduce the bug
Minimal working code
```python
from datasets import load_dataset
import tensorflow as tf
import tensorflow_datasets
# torch
dataset = load_dataset("glue", "mrpc")
# tf
data = tensorflow_datasets.load('glue/{}'.format('mrpc'))
data = list(data['test'].as_numpy_iterator())
for i in range(40,50):
tf_sentence1 = data[i]['sentence1'].decode("utf-8")
tf_sentence2 = data[i]['sentence2'].decode("utf-8")
tf_label = data[i]['label']
index = data[i]['idx']
print('Index {}'.format(index))
torch_sentence1 = dataset['test']['sentence1'][index]
torch_sentence2 = dataset['test']['sentence2'][index]
torch_label = dataset['test']['label'][index]
print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label))
print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label))
```
Sample output
```
Index 954
Tensorflow:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label -1
Torch:
Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws .
Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws .
Label 1
Index 711
Tensorflow:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label -1
Torch:
Sentence1 Others keep records sealed for as little as five years or as much as 30 .
Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years .
Label 0
```
## Expected results
I would expect the datasets to be independent of whether I am working with torch or tensorflow.
## Actual results
Test set labels are provided in the `datasets.load_datasets()` for MRPC. However MRPC is the only task where the test set labels are not -1.
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2452/timeline | null | completed | null | null | false | [
"Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue there."
] |
https://api.github.com/repos/huggingface/datasets/issues/3119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3119/comments | https://api.github.com/repos/huggingface/datasets/issues/3119/events | https://github.com/huggingface/datasets/issues/3119 | 1,031,328,044 | I_kwDODunzps49eNEs | 3,119 | Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-10-20T12:05:07Z | 2021-10-22T19:00:52Z | 2021-10-22T08:30:22Z | null | ## Adding a Dataset
- **Name:** *openslr*
- **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.*
- **Paper:** *https://www.openslr.org/resources/83/about.html*
- **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/*
- **Motivation:** *Increase english ASR data with UK and Irish dialects*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
The *openslr* dataset already exists; this will add an additional subset, *SLR83*. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3119/timeline | null | completed | null | null | false | [
"Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files."
] |
https://api.github.com/repos/huggingface/datasets/issues/1939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1939/comments | https://api.github.com/repos/huggingface/datasets/issues/1939/events | https://github.com/huggingface/datasets/issues/1939 | 815,680,510 | MDU6SXNzdWU4MTU2ODA1MTA= | 1,939 | [firewalled env] OFFLINE mode | [] | closed | false | null | 7 | 2021-02-24T17:13:42Z | 2021-03-05T05:09:54Z | 2021-03-05T05:09:54Z | null | This issue comes from a need to be able to run `datasets` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways to go about it.
## 1. Manual
manually prepare data and metrics files, that is transfer to the firewalled instance the dataset and the metrics and run:
```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```
`datasets` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed.
## 2. Automatic
In some clouds one can prepare the data storage ahead of time in a normal networked environment that doesn't have GPUs, and then switch to the GPU instance, which is firewalled but can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
which should download and cached everything.
2. and then immediately after on the firewalled instance, which shares the same filesystem
```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```
and the metrics and datasets should already be cached by invocation number 1; any network calls should be skipped, and if the logic is missing data it should assert rather than try to fetch anything online.
## Common Issues
1. For example, currently `datasets` tries to look up online datasets if the files contain json or csv, despite the paths already being provided
```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```
2. It has an issue with metrics, e.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir - otherwise it was hanging.
I had to comment out `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1`
Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1939/timeline | null | completed | null | null | false | [
"Thanks for reporting and for all the details and suggestions.\r\n\r\nI'm totally in favor of having a HF_DATASETS_OFFLINE env variable to disable manually all the connection checks, remove retries etc.\r\n\r\nMoreover you may know that the use case that you are mentioning is already supported from `datasets` 1.3.0, i.e. you already can:\r\n- first load datasets and metrics from an instance with internet connection\r\n- then be able to reload datasets and metrics from another instance without connection (as long as the filesystem is shared)\r\n\r\nThis is already implemented, but currently it only works if the requests return a `ConnectionError` (or any error actually). Not sure why it would hang instead of returning an error.\r\n\r\nMaybe this is just a issue with the timeout value being not set or too high ?\r\nIs there a way I can have access to one of the instances on which there's this issue (we can discuss this offline) ?\r\n",
"I'm on master, so using all the available bells and whistles already.\r\n\r\nIf you look at the common issues - it for example tries to look up files if they appear in `_PACKAGED_DATASETS_MODULES` which it shouldn't do.\r\n\r\n--------------\r\n\r\nYes, there is a nuance to it. As I mentioned it's firewalled - that is it has a network but making any calls outside - it just hangs in:\r\n\r\n```\r\nsin_addr=inet_addr(\"xx.xx.xx.xx\")}, [28->16]) = 0\r\nclose(5) = 0\r\nsocket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 5\r\nconnect(5, {sa_family=AF_INET, sin_port=htons(3128), sin_addr=inet_addr(\"yy.yy.yy.yy\")}, 16^C) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)\r\n```\r\nuntil it times out.\r\n\r\nThat's why we need to be able to tell the software that there is no network to rely on even if there is one (good for testing too).\r\n\r\nSo what I'm thinking is that this is a simple matter of pre-ambling any network call wrappers with:\r\n\r\n```\r\nif HF_DATASETS_OFFLINE:\r\n assert \"Attempting to make a network call under Offline mode\"\r\n```\r\n\r\nand then fixing up if there is anything else to fix to make it work.\r\n\r\n--------------\r\n\r\nOtherwise I think the only other problem I encountered is that we need to find a way to pre-cache metrics, for some reason it's not caching it and wanting to fetch it from online.\r\n\r\nWhich is extra strange since it already has those files in the `datasets` repo itself that is on the filesystem.\r\n\r\nThe workaround I had to do is to copy `rouge/rouge.py` (with the parent folder) from the datasets repo to the current dir - and then it proceeded.",
"Ok understand better the hanging issue.\r\nI guess catching connection errors is not enough, we should also avoid all the hangings.\r\nCurrently the offline mode tests are only done by simulating an instant connection fail that returns an error, let's have another connection mock that hangs instead.\r\n\r\nI'll also take a look at why you had to do this for `rouge`.\r\n",
"FWIW, I think instant failure on the behalf of a network call is the simplest solution to correctly represent the environment and having the caller to sort it out is the next thing to do, since here it is the case of having no functional network, it's just that the software doesn't know this is the case, because there is some network. So we just need to help it to bail out instantly rather than hang waiting for it to time out. And afterwards everything else you said.",
"Update on this: \r\n\r\nI managed to create a mock environment for tests that makes the connections hang until timeout.\r\nI managed to reproduce the issue you're having in this environment.\r\n\r\nI'll update the offline test cases to also test the robustness to connection hangings, and make sure we set proper timeouts where it's needed in the code. This should cover the _automatic_ section you mentioned.",
"Fabulous! I'm glad you were able to reproduce the issues, @lhoestq!",
"I lost access to the firewalled setup, but I emulated it with:\r\n\r\n```\r\nsudo ufw enable\r\nsudo ufw default deny outgoing\r\n```\r\n(thanks @mfuntowicz)\r\n\r\nI was able to test `HF_DATASETS_OFFLINE=1` and it worked great - i.e. didn't try to reach out with it and used the cached files instead.\r\n\r\nThank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/312/comments | https://api.github.com/repos/huggingface/datasets/issues/312/events | https://github.com/huggingface/datasets/issues/312 | 645,025,561 | MDU6SXNzdWU2NDUwMjU1NjE= | 312 | [Feature request] Add `shard()` method to dataset | [] | closed | false | null | 2 | 2020-06-24T22:48:33Z | 2020-07-06T12:35:36Z | 2020-07-06T12:35:36Z | null | Currently, to shard a dataset into 10 pieces on different ranks, you can run
```python
rank = 3 # for example
size = 10
dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]")
```
However, this breaks down if you have a number of ranks that doesn't divide cleanly into 100, such as 64 ranks. Is there interest in adding a method shard() that looks like this?
```python
rank = 3
size = 64
dataset = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train").shard(rank=rank, size=size)
```
TensorFlow has a similar API: https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard. I'd be happy to contribute this code. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/312/timeline | null | completed | null | null | false | [
"Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?",
"Thanks for the pointer to those functions! It's still a little more verbose since you have to manually calculate which ids each rank would keep, but definitely works.\r\n\r\nMy use case is multi-node, multi-GPU training and avoiding global batches of duplicate elements. I'm using horovod. You can shuffle indices, or set random seeds, but explicitly sharding the dataset up front is the safest and clearest way I've found to do so."
] |
https://api.github.com/repos/huggingface/datasets/issues/3760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3760/comments | https://api.github.com/repos/huggingface/datasets/issues/3760/events | https://github.com/huggingface/datasets/issues/3760 | 1,144,804,558 | I_kwDODunzps5EPFTO | 3,760 | Unable to view the Gradio flagged call back dataset | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 5 | 2022-02-19T17:45:08Z | 2022-03-22T07:12:11Z | 2022-03-22T07:12:11Z | null | ## Dataset viewer issue for '*savtadepth-flags*'
**Link:** *[savtadepth-flags](https://huggingface.co/datasets/kingabzpro/savtadepth-flags)*
*With Gradio 2.8.1 the dataset viewer stopped working. I tried to add values manually but it's not working. The dataset is also not showing the link with the app https://huggingface.co/spaces/kingabzpro/savtadepth.*
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3760/timeline | null | completed | null | null | false | [
"Hi @kingabzpro.\r\n\r\nI think you need to create a loading script that creates the dataset from the CSV file and the image paths.\r\n\r\nAs example, you could have a look at the Food-101 dataset: https://huggingface.co/datasets/food101\r\n- Loading script: https://huggingface.co/datasets/food101/blob/main/food101.py\r\n\r\nOnce the loading script is created, the viewer will show a previsualization of your dataset. ",
"@albertvillanova I don't think this is the issue. I have created another dataset with similar files and format and it works. https://huggingface.co/datasets/kingabzpro/savtadepth-flags-V2",
"Yes, you are right, that was not the issue.\r\n\r\nJust take into account that sometimes the viewer can take some time until it shows the preview of the dataset.\r\nAfter some time, yours is finally properly shown: https://huggingface.co/datasets/kingabzpro/savtadepth-flags",
"The problem was resolved by deleted the dataset and creating new one with similar name and then clicking on flag button.",
"I think if you make manual changes to dataset the whole system breaks. "
] |