url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (sequence) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/events | https://github.com/huggingface/datasets/pull/1787 | 795,485,842 | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | 1,787 | Update the CommonGen citation information | [] | closed | false | null | 0 | 2021-01-27T22:12:47Z | 2021-01-28T13:56:29Z | 2021-01-28T13:56:29Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1787"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2716/comments | https://api.github.com/repos/huggingface/datasets/issues/2716/events | https://github.com/huggingface/datasets/issues/2716 | 952,902,778 | MDU6SXNzdWU5NTI5MDI3Nzg= | 2,716 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-07-26T13:24:59Z | 2021-07-26T18:04:43Z | 2021-07-26T18:04:43Z | null | When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset after a `map` call with `batched=True`, the batching operation will not happen; instead, `batched` will be set to `False`.
I did an RCA on the datasets codebase; the problem stems from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads
`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means the iterator falls back to the default constructor value, which in this case is `False`.
To remedy the problem we can change this line to
`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2716/timeline | null | completed | null | null | false | [
"Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)",
"Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)",
"Fixed by #2717."
] |
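The record above (issue 2716) describes `batched` being dropped when an iterable dataset is shuffled after a batched `map`. Below is a minimal, self-contained sketch of that failure mode and of the proposed fix; the class and method names are illustrative stand-ins, not the actual `datasets` internals.

```python
# Illustrative sketch (not the real `datasets` code): if the shuffled copy of a
# mapped iterable is rebuilt without forwarding `batched`, batching is silently
# disabled because the constructor default (False) wins.

class MappedIterable:
    def __init__(self, data, function, batched=False, batch_size=3):
        self.data = data
        self.function = function
        self.batched = batched
        self.batch_size = batch_size

    def __iter__(self):
        if self.batched:
            # apply the function to whole batches
            for i in range(0, len(self.data), self.batch_size):
                yield from self.function(self.data[i : i + self.batch_size])
        else:
            # apply the function example by example
            for x in self.data:
                yield self.function([x])[0]

    def shuffle_buggy(self, seed):
        # `batched` is not forwarded -> falls back to the default (False)
        return MappedIterable(
            list(reversed(self.data)), function=self.function, batch_size=self.batch_size
        )

    def shuffle_fixed(self, seed):
        # forwarding `batched` preserves batched mapping after shuffling
        return MappedIterable(
            list(reversed(self.data)), function=self.function,
            batched=self.batched, batch_size=self.batch_size,
        )


double = lambda batch: [2 * x for x in batch]
it = MappedIterable([1, 2, 3, 4], double, batched=True)
assert it.shuffle_buggy(seed=0).batched is False   # the reported bug: batching disabled
assert it.shuffle_fixed(seed=0).batched is True    # the proposed fix: batching preserved
```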
https://api.github.com/repos/huggingface/datasets/issues/2268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2268/comments | https://api.github.com/repos/huggingface/datasets/issues/2268/events | https://github.com/huggingface/datasets/pull/2268 | 868,773,380 | MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1 | 2,268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | [] | closed | false | null | 3 | 2021-04-27T11:58:28Z | 2021-06-12T12:44:49Z | 2021-04-27T13:43:20Z | null | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2268/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2268",
"merged_at": "2021-04-27T13:43:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2268"
} | true | [
"@lhoestq note that the segfault also occurs on Linux.",
"Created the ticket at\r\nhttps://issues.apache.org/jira/browse/ARROW-12568",
"@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 systems."
] |
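The pull request above pins `pyarrow<4.0.0` to avoid the segfault when casting a sliced ListArray of integers. A sketch of what such a pin can look like in a `setup.py`-style dependency list follows; the lower bound shown here is illustrative, not taken from the repository.

```python
# Illustrative dependency pin of the kind described in PR 2268; the real change
# lives in the repository's setup.py and may use different bounds.
install_requires = [
    "pyarrow>=1.0.0,<4.0.0",  # skip pyarrow 4.0.0 until the ListArray cast segfault is resolved
]
```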
https://api.github.com/repos/huggingface/datasets/issues/373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/373/comments | https://api.github.com/repos/huggingface/datasets/issues/373/events | https://github.com/huggingface/datasets/issues/373 | 654,845,133 | MDU6SXNzdWU2NTQ4NDUxMzM= | 373 | Segmentation fault when loading local JSON dataset as of #372 | [] | closed | false | null | 11 | 2020-07-10T15:04:25Z | 2022-10-04T18:05:47Z | 2022-10-04T18:05:47Z | null | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/373/timeline | null | completed | null | null | false | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.json as paj\r\n\r\nimport nlp as hf_nlp\r\n\r\nfrom nlp import DatasetInfo, BuilderConfig, SplitGenerator, Split, utils\r\nfrom nlp.arrow_writer import ArrowWriter\r\n\r\n\r\nclass JSONDatasetBuilder(hf_nlp.ArrowBasedBuilder):\r\n BUILDER_CONFIG_CLASS = BuilderConfig\r\n\r\n def _info(self):\r\n return DatasetInfo()\r\n\r\n def _split_generators(self, dl_manager):\r\n \"\"\" We handle string, list and dicts in datafiles\r\n \"\"\"\r\n if isinstance(self.config.data_files, (str, list, tuple)):\r\n files = self.config.data_files\r\n if isinstance(files, str):\r\n files = [files]\r\n return [SplitGenerator(name=Split.TRAIN, gen_kwargs={\"files\": files})]\r\n splits = []\r\n for split_name in [Split.TRAIN, Split.VALIDATION, Split.TEST]:\r\n if split_name in self.config.data_files:\r\n files = self.config.data_files[split_name]\r\n if isinstance(files, str):\r\n files = [files]\r\n splits.append(SplitGenerator(name=split_name, gen_kwargs={\"files\": files}))\r\n return splits\r\n\r\n def _prepare_split(self, split_generator):\r\n fname = \"{}-{}.arrow\".format(self.name, split_generator.name)\r\n fpath = os.path.join(self._cache_dir, fname)\r\n\r\n writer = ArrowWriter(path=fpath)\r\n\r\n generator = self._generate_tables(**split_generator.gen_kwargs)\r\n for key, table in utils.tqdm(generator, unit=\" tables\", leave=False):\r\n writer.write_table(table)\r\n num_examples, num_bytes = writer.finalize()\r\n\r\n split_generator.split_info.num_examples = num_examples\r\n split_generator.split_info.num_bytes = num_bytes\r\n\r\n def _generate_tables(self, files):\r\n for i, file in enumerate(files):\r\n pa_table = paj.read_json(\r\n file\r\n )\r\n yield i, pa_table\r\n\r\n```",
"Yes, deleting the directory solves the error whenever I try to rerun.\r\n\r\nBy replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `/home/XXX/.cache/lib/python3.7/site-packages/nlp/datasets/json/(...)/json.py` \r\n\r\nWhen I was testing this out before the #372 PR was merged I had issues installing it properly locally. Since the `json.py` script was downloaded instead of actually using the one provided in the local install. Manually updating that file seemed to solve it, but it didn't seem like a proper solution. Especially when having to run this on a remote compute cluster with no access to that directory.",
"I see, diving in the JSON file for SQuAD it's a pretty complex structure.\r\n\r\nThe best solution for you, if you have a dataset really similar to SQuAD would be to copy and modify the SQuAD data processing script. We will probably add soon an option to be able to specify file path to use instead of the automatic URL encoded in the script but in the meantime you can:\r\n- copy the [squad script](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) in a new script for your dataset\r\n- in the new script replace [these `urls_to_download `](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py#L99-L102) by `urls_to_download=self.config.data_files`\r\n- load the dataset with `dataset = load_dataset('path/to/your/new/script', data_files={nlp.Split.TRAIN: \"./datasets/train-v2.0.json\"})`\r\n\r\nThis way you can reuse all the processing logic of the SQuAD loading script.",
"This seems like a more sensible solution! Thanks, @thomwolf. It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation.\r\n\r\nAm I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from file? Meaning that essentially with a file containing another format, that is the only function that requires re-implementation? I'm working with a lot of datasets that, due to licensing and privacy, cannot be published. As this library is so neatly integrated with the transformers library and gives easy access to public sets such as SQUAD and increased performance, it is very neat to be able to load my private sets as well. As of now, I have just been working on scripts for translating all my data into the SQUAD-format before using the json script, but I see that it might not be necessary after all. ",
"Yes `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary you also need to update the `features` in the `_info`.\r\n\r\nI'm currently writing the doc so it should be easier soon to use the library and know how to add your datasets.\r\n",
"Could you try to update pyarrow to >=0.17.0 @vegarab ?\r\nI don't have any segmentation fault with my version of pyarrow (0.17.1)\r\n\r\nI tested with\r\n```python\r\nimport nlp\r\ns = nlp.load_dataset(\"json\", data_files=\"train-v2.0.json\", field=\"data\", split=\"train\")\r\ns[0]\r\n# {'title': 'Normans', 'paragraphs': [{'qas': [{'question': 'In what country is Normandy located?', 'id':...\r\n```",
"Also if you want to have your own dataset script, we now have a new documentation !\r\nSee here:\r\nhttps://huggingface.co/nlp/add_dataset.html",
"@lhoestq \r\nFor some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file.\r\n\r\nAnyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. Otherwise, pyarrow complains when attempting to cast the struct:\r\n```py\r\nimport nlp\r\n>>> s = nlp.load_dataset(\"json\", data_files=\"datasets/train-v2.0.json\", field=\"data\", split=\"train\")\r\nUsing custom data configuration default\r\n>>> s[0]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 558, in __getitem__\r\n format_kwargs=self._format_kwargs,\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/arrow_dataset.py\", line 498, in _getitem\r\n outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict(\"list\"))\r\n File \"pyarrow/array.pxi\", line 559, in pyarrow.lib._PandasConvertible.to_pandas\r\n File \"pyarrow/table.pxi\", line 1367, in pyarrow.lib.Table._to_pandas\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 766, in table_to_blockmanager\r\n blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes)\r\n File \"/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/pyarrow/pandas_compat.py\", line 1101, in _table_to_blocks\r\n list(extension_columns.keys()))\r\n File \"pyarrow/table.pxi\", line 881, in pyarrow.lib.table_to_blocks\r\n File \"pyarrow/error.pxi\", line 105, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>\r\n>>> s\r\nDataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 35)\r\n```\r\n\r\nUpgrading to >=0.17.0 provides the same dataset structure, but accessing the records is possible without the same exception. \r\n\r\n",
"Very happy to see some extended documentation! ",
"#376 seems to be reporting the same issue as mentioned above. ",
"This issue helped me a lot, thanks.\r\nHope this issue will be fixed soon."
] |
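The thread above converges on a working pattern: load the SQuAD-style JSON through the generic JSON builder, point `field` at the top-level `data` key, and use pyarrow >= 0.17.0. A minimal sketch against the current `datasets` package (the thread itself uses the older `nlp` package) is shown below; the file path is the one from the report.

```python
# Minimal sketch of the approach that works in the thread above, written
# against the current `datasets` API instead of the legacy `nlp` package.
from datasets import load_dataset

squad_like = load_dataset(
    "json",
    data_files={"train": "./datasets/train-v2.0.json"},  # SQuAD v2.0 file from the report
    field="data",   # SQuAD stores its records under the top-level "data" key
    split="train",
)
print(squad_like[0]["title"])
```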
https://api.github.com/repos/huggingface/datasets/issues/3161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3161/comments | https://api.github.com/repos/huggingface/datasets/issues/3161/events | https://github.com/huggingface/datasets/pull/3161 | 1,035,444,292 | PR_kwDODunzps4tpCsm | 3,161 | Add riddle_sense dataset | [] | closed | false | null | 2 | 2021-10-25T18:30:56Z | 2021-11-04T14:01:15Z | 2021-11-04T14:01:15Z | null | Adding a new dataset for QA with riddles. I'm confused about the tagging process because it looks like the streamlit app loads data from the current repo, so is it something that should be done after merging or off my fork? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3161/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3161.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3161",
"merged_at": "2021-11-04T14:01:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3161.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3161"
} | true | [
"@lhoestq \r\nI address all the comments, I think. Thanks! \r\n",
"The five test fails are unrelated to this PR and fixed on master so we can ignore them"
] |
https://api.github.com/repos/huggingface/datasets/issues/3096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3096/comments | https://api.github.com/repos/huggingface/datasets/issues/3096/events | https://github.com/huggingface/datasets/pull/3096 | 1,027,535,685 | PR_kwDODunzps4tQblQ | 3,096 | Fix Audio feature mp3 resampling | [] | closed | false | null | 0 | 2021-10-15T15:05:19Z | 2021-10-15T15:38:30Z | 2021-10-15T15:38:30Z | null | Issue #3095 is related to mp3 resampling, not to `cast_column`.
This PR fixes Audio feature mp3 resampling.
Fix #3095. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3096/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3096",
"merged_at": "2021-10-15T15:38:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3096"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2089/comments | https://api.github.com/repos/huggingface/datasets/issues/2089/events | https://github.com/huggingface/datasets/issues/2089 | 836,788,019 | MDU6SXNzdWU4MzY3ODgwMTk= | 2,089 | Add documentaton for dataset README.md files | [] | closed | false | null | 8 | 2021-03-20T11:44:38Z | 2023-07-25T16:45:38Z | 2023-07-25T16:45:37Z | null | Hi,
the dataset README files have special headers.
Somehow, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?
Thanks
Philip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2089/timeline | null | completed | null | null | false | [
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)",
"@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.",
"We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them",
"@lhoestq what is the status on this? Did you add documentation?",
"Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources",
"@lhoestq is there something like this form Models?",
"I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this",
"When modifying a README file, the Hub now displays a special UI with allowed values (see https://huggingface.co/docs/datasets/main/en/upload_dataset#create-a-dataset-card)."
] |
https://api.github.com/repos/huggingface/datasets/issues/6015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6015/comments | https://api.github.com/repos/huggingface/datasets/issues/6015/events | https://github.com/huggingface/datasets/pull/6015 | 1,798,807,893 | PR_kwDODunzps5VMhgB | 6,015 | Add metadata ui screenshot in docs | [] | closed | false | null | 3 | 2023-07-11T12:16:29Z | 2023-07-11T16:07:28Z | 2023-07-11T15:56:46Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6015",
"merged_at": "2023-07-11T15:56:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6015"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007633 / 0.011353 (-0.003720) | 0.004666 / 0.011008 (-0.006343) | 0.097768 / 0.038508 (0.059260) | 0.085153 / 0.023109 (0.062044) | 0.400315 / 0.275898 (0.124417) | 0.452903 / 0.323480 (0.129423) | 0.006227 / 0.007986 (-0.001759) | 0.003814 / 0.004328 (-0.000515) | 0.074586 / 0.004250 (0.070336) | 0.064295 / 0.037052 (0.027242) | 0.408082 / 0.258489 (0.149593) | 0.446921 / 0.293841 (0.153080) | 0.034593 / 0.128546 (-0.093953) | 0.009191 / 0.075646 (-0.066456) | 0.337099 / 0.419271 (-0.082173) | 0.075320 / 0.043533 (0.031787) | 0.403488 / 0.255139 (0.148349) | 0.435309 / 0.283200 (0.152109) | 0.035675 / 0.141683 (-0.106008) | 1.732642 / 1.452155 (0.280487) | 1.770238 / 1.492716 (0.277522) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235879 / 0.018006 (0.217873) | 0.500330 / 0.000490 (0.499841) | 0.005221 / 0.000200 (0.005021) | 0.000150 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032479 / 0.037411 (-0.004933) | 0.095873 / 0.014526 (0.081348) | 0.107118 / 0.176557 (-0.069438) | 0.173809 / 0.737135 (-0.563326) | 0.109832 / 0.296338 (-0.186507) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.444342 / 0.215209 (0.229133) | 4.459010 / 2.077655 (2.381355) | 2.209687 / 1.504120 (0.705567) | 2.007556 / 1.541195 (0.466362) | 2.113683 / 1.468490 
(0.645193) | 0.544281 / 4.584777 (-4.040496) | 4.037151 / 3.745712 (0.291439) | 4.852644 / 5.269862 (-0.417217) | 3.134126 / 4.565676 (-1.431550) | 0.066815 / 0.424275 (-0.357460) | 0.008836 / 0.007607 (0.001229) | 0.560904 / 0.226044 (0.334859) | 5.302760 / 2.268929 (3.033832) | 2.750182 / 55.444624 (-52.694442) | 2.322595 / 6.876477 (-4.553882) | 2.547486 / 2.142072 (0.405414) | 0.665766 / 4.805227 (-4.139461) | 0.151613 / 6.500664 (-6.349051) | 0.071155 / 0.075469 (-0.004314) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.473717 / 1.841788 (-0.368071) | 22.584179 / 8.074308 (14.509871) | 15.888001 / 10.191392 (5.696609) | 0.181073 / 0.680424 (-0.499351) | 0.021395 / 0.534201 (-0.512806) | 0.452693 / 0.579283 (-0.126590) | 0.447709 / 0.434364 (0.013345) | 0.529599 / 0.540337 (-0.010738) | 0.699241 / 1.386936 (-0.687695) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007917 / 0.011353 (-0.003436) | 0.004544 / 0.011008 (-0.006464) | 0.074566 / 0.038508 (0.036058) | 0.087530 / 0.023109 (0.064421) | 0.419753 / 0.275898 (0.143854) | 0.452352 / 0.323480 (0.128872) | 0.005882 / 0.007986 (-0.002104) | 0.003904 / 0.004328 (-0.000425) | 0.073539 / 0.004250 (0.069289) | 0.071320 / 0.037052 (0.034267) | 0.432899 / 0.258489 (0.174409) | 0.470365 / 0.293841 (0.176524) | 0.036198 / 0.128546 (-0.092348) | 0.009342 / 0.075646 (-0.066304) | 0.080970 / 0.419271 (-0.338301) | 0.058769 / 0.043533 (0.015236) | 0.413397 / 0.255139 (0.158258) | 0.448362 / 0.283200 (0.165162) | 0.034177 / 0.141683 (-0.107506) | 1.706217 / 1.452155 (0.254063) | 1.776743 / 1.492716 (0.284026) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198779 / 0.018006 (0.180773) | 0.499862 / 0.000490 (0.499372) | 0.003891 / 0.000200 (0.003692) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034671 / 0.037411 (-0.002740) | 0.103165 / 0.014526 (0.088639) | 0.115813 / 0.176557 (-0.060744) | 0.177407 / 0.737135 (-0.559728) | 0.117733 / 0.296338 (-0.178606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.476859 / 0.215209 (0.261650) | 4.823063 / 2.077655 (2.745409) | 2.524133 / 1.504120 (1.020013) | 2.374482 / 1.541195 (0.833288) | 2.518047 / 1.468490 (1.049557) | 0.559131 / 4.584777 (-4.025646) | 4.126213 / 3.745712 (0.380501) | 6.488570 / 5.269862 (1.218708) | 3.816540 / 4.565676 (-0.749137) | 0.064742 / 0.424275 (-0.359533) | 0.008476 / 0.007607 (0.000869) | 0.576432 / 0.226044 (0.350387) | 5.835133 / 2.268929 (3.566205) | 3.237833 / 55.444624 (-52.206791) | 2.726596 / 6.876477 (-4.149880) | 2.799212 / 2.142072 (0.657139) | 0.661628 / 4.805227 (-4.143599) | 0.153997 / 6.500664 (-6.346667) | 0.070621 / 0.075469 (-0.004848) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.648505 / 1.841788 (-0.193282) | 22.454019 / 8.074308 (14.379711) | 16.077098 / 10.191392 (5.885706) | 0.217875 / 0.680424 (-0.462549) | 0.021285 / 0.534201 (-0.512916) | 0.459837 / 0.579283 (-0.119446) | 0.476211 / 0.434364 (0.041847) | 0.525903 / 0.540337 (-0.014435) | 0.717224 / 1.386936 (-0.669712) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008929 / 0.011353 (-0.002424) | 0.004188 / 0.011008 (-0.006820) | 0.097030 / 0.038508 (0.058522) | 0.071363 / 0.023109 (0.048254) | 0.333116 / 0.275898 (0.057218) | 0.371272 / 0.323480 (0.047792) | 0.006430 / 0.007986 (-0.001555) | 0.003689 / 0.004328 (-0.000639) | 0.068666 / 0.004250 (0.064416) | 0.057562 / 0.037052 (0.020510) | 0.347208 / 0.258489 (0.088719) | 0.390514 / 0.293841 (0.096673) | 0.050560 / 0.128546 (-0.077987) | 0.013372 / 0.075646 (-0.062275) | 0.311345 / 0.419271 (-0.107927) | 0.068990 / 0.043533 (0.025457) | 0.363026 / 0.255139 (0.107887) | 0.379793 / 0.283200 (0.096593) | 0.036891 / 0.141683 (-0.104792) | 1.583481 / 1.452155 (0.131327) | 1.688727 / 1.492716 (0.196011) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.209777 / 0.018006 (0.191771) | 0.507267 / 0.000490 (0.506777) | 0.003637 / 0.000200 (0.003438) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029309 / 0.037411 (-0.008102) | 0.088386 / 0.014526 (0.073861) | 0.104974 / 0.176557 (-0.071582) | 0.171999 / 0.737135 (-0.565137) | 0.110797 / 0.296338 (-0.185542) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.543465 / 0.215209 (0.328256) | 5.361491 / 2.077655 (3.283836) | 2.348712 / 1.504120 (0.844592) | 2.012527 / 1.541195 (0.471332) | 2.069776 / 1.468490 
(0.601286) | 0.874262 / 4.584777 (-3.710515) | 4.877317 / 3.745712 (1.131605) | 5.327459 / 5.269862 (0.057597) | 3.336823 / 4.565676 (-1.228854) | 0.100456 / 0.424275 (-0.323819) | 0.008503 / 0.007607 (0.000895) | 0.692009 / 0.226044 (0.465965) | 6.912731 / 2.268929 (4.643802) | 3.110548 / 55.444624 (-52.334076) | 2.443665 / 6.876477 (-4.432811) | 2.528713 / 2.142072 (0.386641) | 1.076358 / 4.805227 (-3.728869) | 0.220352 / 6.500664 (-6.280312) | 0.080293 / 0.075469 (0.004824) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.538444 / 1.841788 (-0.303344) | 21.121221 / 8.074308 (13.046913) | 19.810609 / 10.191392 (9.619216) | 0.225406 / 0.680424 (-0.455018) | 0.026652 / 0.534201 (-0.507549) | 0.430372 / 0.579283 (-0.148911) | 0.510722 / 0.434364 (0.076358) | 0.514347 / 0.540337 (-0.025991) | 0.686050 / 1.386936 (-0.700886) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007675 / 0.011353 (-0.003678) | 0.004542 / 0.011008 (-0.006466) | 0.069655 / 0.038508 (0.031147) | 0.069338 / 0.023109 (0.046229) | 0.436505 / 0.275898 (0.160607) | 0.481806 / 0.323480 (0.158326) | 0.005315 / 0.007986 (-0.002670) | 0.004455 / 0.004328 (0.000127) | 0.072674 / 0.004250 (0.068424) | 0.058088 / 0.037052 (0.021035) | 0.445825 / 0.258489 (0.187336) | 0.501706 / 0.293841 (0.207865) | 0.047123 / 0.128546 (-0.081424) | 0.012943 / 0.075646 (-0.062703) | 0.093491 / 0.419271 (-0.325780) | 0.060169 / 0.043533 (0.016637) | 0.436530 / 0.255139 (0.181391) | 0.466873 / 0.283200 (0.183674) | 0.040453 / 0.141683 (-0.101230) | 1.586438 / 1.452155 (0.134283) | 1.671081 / 1.492716 (0.178365) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180607 / 0.018006 (0.162601) | 0.520145 / 0.000490 (0.519655) | 0.004824 / 0.000200 (0.004624) | 0.000116 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029308 / 0.037411 (-0.008103) | 0.093652 / 0.014526 (0.079126) | 0.102332 / 0.176557 (-0.074224) | 0.162414 / 0.737135 (-0.574721) | 0.098017 / 0.296338 (-0.198321) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.583949 / 0.215209 (0.368740) | 6.035191 / 2.077655 (3.957536) | 2.801274 / 1.504120 (1.297155) | 2.566150 / 1.541195 (1.024955) | 2.437122 / 1.468490 (0.968632) | 0.865038 / 4.584777 (-3.719739) | 4.841727 / 3.745712 (1.096015) | 4.683919 / 5.269862 (-0.585943) | 2.941240 / 4.565676 (-1.624437) | 0.104888 / 0.424275 (-0.319387) | 0.007747 / 0.007607 (0.000140) | 0.780041 / 0.226044 (0.553997) | 7.771314 / 2.268929 (5.502385) | 3.680814 / 55.444624 (-51.763811) | 2.938472 / 6.876477 (-3.938004) | 2.981740 / 2.142072 (0.839668) | 1.065411 / 4.805227 (-3.739816) | 0.222265 / 6.500664 (-6.278399) | 0.082428 / 0.075469 (0.006959) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.626774 / 1.841788 (-0.215014) | 21.618284 / 8.074308 (13.543976) | 20.596743 / 10.191392 (10.405351) | 0.240969 / 0.680424 (-0.439454) | 0.025630 / 0.534201 (-0.508570) | 0.481981 / 0.579283 (-0.097302) | 0.547914 / 0.434364 (0.113550) | 0.522296 / 0.540337 (-0.018041) | 0.729174 / 1.386936 (-0.657762) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/90 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/90/comments | https://api.github.com/repos/huggingface/datasets/issues/90/events | https://github.com/huggingface/datasets/pull/90 | 617,311,877 | MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0 | 90 | Add download gg drive | [] | closed | false | null | 2 | 2020-05-13T09:56:02Z | 2020-05-13T12:46:28Z | 2020-05-13T10:05:31Z | null | We can now add datasets that download from google drive | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/90/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/90.diff",
"html_url": "https://github.com/huggingface/datasets/pull/90",
"merged_at": "2020-05-13T10:05:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/90.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/90"
} | true | [
"awesome - so no manual downloaded needed here? ",
"Yes exactly. It works like a standard download"
] |
https://api.github.com/repos/huggingface/datasets/issues/5640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5640/comments | https://api.github.com/repos/huggingface/datasets/issues/5640/events | https://github.com/huggingface/datasets/pull/5640 | 1,625,896,057 | PR_kwDODunzps5MID3I | 5,640 | Less zip false positives | [] | closed | false | null | 6 | 2023-03-15T16:48:59Z | 2023-03-16T13:47:37Z | 2023-03-16T13:40:12Z | null | `zipfile.is_zipfile` returns false positives for some Parquet files. This causes errors when loading certain Parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`.
This is a known issue: https://github.com/python/cpython/issues/72680
At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it @albertvillanova or not ?
IMO it's ok to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far.
Close https://github.com/huggingface/datasets/issues/5639 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5640/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5640",
"merged_at": "2023-03-16T13:40:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5640"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006998 / 0.011353 (-0.004355) | 0.005093 / 0.011008 (-0.005916) | 0.100490 / 0.038508 (0.061982) | 0.032736 / 0.023109 (0.009627) | 0.297738 / 0.275898 (0.021840) | 0.322255 / 0.323480 (-0.001225) | 0.005583 / 0.007986 (-0.002402) | 0.004007 / 0.004328 (-0.000321) | 0.075863 / 0.004250 (0.071613) | 0.044212 / 0.037052 (0.007159) | 0.300033 / 0.258489 (0.041544) | 0.341997 / 0.293841 (0.048156) | 0.036172 / 0.128546 (-0.092374) | 0.012176 / 0.075646 (-0.063471) | 0.356052 / 0.419271 (-0.063220) | 0.050438 / 0.043533 (0.006905) | 0.294677 / 0.255139 (0.039538) | 0.318050 / 0.283200 (0.034850) | 0.104733 / 0.141683 (-0.036950) | 1.435681 / 1.452155 (-0.016474) | 1.534793 / 1.492716 (0.042076) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242815 / 0.018006 (0.224809) | 0.565983 / 0.000490 (0.565494) | 0.006800 / 0.000200 (0.006600) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026548 / 0.037411 (-0.010863) | 0.104816 / 0.014526 (0.090290) | 0.116222 / 0.176557 (-0.060335) | 0.172143 / 0.737135 (-0.564992) | 0.121631 / 0.296338 (-0.174707) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400126 / 0.215209 (0.184917) | 4.004538 / 2.077655 (1.926883) | 1.798822 / 1.504120 (0.294702) | 1.595191 / 1.541195 (0.053996) | 1.645777 / 1.468490 
(0.177287) | 0.705643 / 4.584777 (-3.879134) | 3.750887 / 3.745712 (0.005175) | 2.136547 / 5.269862 (-3.133315) | 1.475881 / 4.565676 (-3.089795) | 0.086921 / 0.424275 (-0.337354) | 0.012379 / 0.007607 (0.004771) | 0.505824 / 0.226044 (0.279779) | 5.052364 / 2.268929 (2.783435) | 2.279983 / 55.444624 (-53.164641) | 1.932253 / 6.876477 (-4.944224) | 2.051359 / 2.142072 (-0.090714) | 0.851906 / 4.805227 (-3.953321) | 0.169566 / 6.500664 (-6.331098) | 0.064600 / 0.075469 (-0.010869) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.165859 / 1.841788 (-0.675929) | 15.049950 / 8.074308 (6.975642) | 14.095981 / 10.191392 (3.904589) | 0.151779 / 0.680424 (-0.528645) | 0.017537 / 0.534201 (-0.516664) | 0.420164 / 0.579283 (-0.159119) | 0.418932 / 0.434364 (-0.015432) | 0.488749 / 0.540337 (-0.051588) | 0.582359 / 1.386936 (-0.804577) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007426 / 0.011353 (-0.003927) | 0.005248 / 0.011008 (-0.005761) | 0.074118 / 0.038508 (0.035610) | 0.034223 / 0.023109 (0.011114) | 0.337780 / 0.275898 (0.061882) | 0.376300 / 0.323480 (0.052820) | 0.006142 / 0.007986 (-0.001843) | 0.004246 / 0.004328 (-0.000083) | 0.074177 / 0.004250 (0.069926) | 0.052698 / 0.037052 (0.015646) | 0.340229 / 0.258489 (0.081740) | 0.396172 / 0.293841 (0.102331) | 0.037293 / 0.128546 (-0.091253) | 0.012514 / 0.075646 (-0.063132) | 0.087144 / 0.419271 (-0.332128) | 0.051922 / 0.043533 (0.008390) | 0.333188 / 0.255139 (0.078049) | 0.355420 / 0.283200 (0.072220) | 0.110273 / 0.141683 (-0.031410) | 1.447826 / 1.452155 (-0.004329) | 1.561135 / 1.492716 (0.068419) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.269203 / 0.018006 (0.251197) | 0.551997 / 0.000490 (0.551508) | 0.001558 / 0.000200 (0.001359) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029511 / 0.037411 (-0.007900) | 0.108614 / 0.014526 (0.094089) | 0.123438 / 0.176557 (-0.053118) | 0.171596 / 0.737135 (-0.565539) | 0.126828 / 0.296338 (-0.169511) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420520 / 0.215209 (0.205310) | 4.175672 / 2.077655 (2.098017) | 1.982220 / 1.504120 (0.478101) | 1.788575 / 1.541195 (0.247381) | 1.860840 / 1.468490 (0.392349) | 0.706730 / 4.584777 (-3.878047) | 3.858718 / 3.745712 (0.113005) | 3.069389 / 5.269862 (-2.200472) | 1.827603 / 4.565676 (-2.738073) | 0.087893 / 0.424275 (-0.336382) | 0.012613 / 0.007607 (0.005006) | 0.524177 / 0.226044 (0.298132) | 5.177077 / 2.268929 (2.908148) | 2.494397 / 55.444624 (-52.950227) | 2.189484 / 6.876477 (-4.686992) | 2.217626 / 2.142072 (0.075554) | 0.846326 / 4.805227 (-3.958901) | 0.176558 / 6.500664 (-6.324106) | 0.065018 / 0.075469 (-0.010451) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.268618 / 1.841788 (-0.573170) | 15.132711 / 8.074308 (7.058403) | 14.585530 / 10.191392 (4.394138) | 0.163454 / 0.680424 (-0.516970) | 0.017442 / 0.534201 (-0.516759) | 0.421746 / 0.579283 (-0.157537) | 0.425412 / 0.434364 (-0.008952) | 0.499178 / 0.540337 (-0.041159) | 0.595458 / 1.386936 (-0.791478) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007980 / 0.011353 (-0.003373) | 0.005414 / 0.011008 (-0.005594) | 0.099226 / 0.038508 (0.060718) | 0.035442 / 0.023109 (0.012332) | 0.304851 / 0.275898 (0.028952) | 0.337144 / 0.323480 (0.013664) | 0.006162 / 0.007986 (-0.001823) | 0.004151 / 0.004328 (-0.000177) | 0.074708 / 0.004250 (0.070458) | 0.049690 / 0.037052 (0.012638) | 0.307658 / 0.258489 (0.049168) | 0.358472 / 0.293841 (0.064631) | 0.037181 / 0.128546 (-0.091365) | 0.012259 / 0.075646 (-0.063387) | 0.335426 / 0.419271 (-0.083846) | 0.050790 / 0.043533 (0.007257) | 0.301715 / 0.255139 (0.046576) | 0.320834 / 0.283200 (0.037634) | 0.102357 / 0.141683 (-0.039326) | 1.454750 / 1.452155 (0.002596) | 1.571994 / 1.492716 (0.079278) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.218708 / 0.018006 (0.200702) | 0.444391 / 0.000490 (0.443901) | 0.005717 / 0.000200 (0.005517) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028017 / 0.037411 (-0.009395) | 0.112753 / 0.014526 (0.098227) | 0.121003 / 0.176557 (-0.055554) | 0.181085 / 0.737135 (-0.556050) | 0.127211 / 0.296338 (-0.169127) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.400803 / 0.215209 (0.185594) | 4.007315 / 2.077655 (1.929660) | 1.826911 / 1.504120 (0.322791) | 1.637799 / 1.541195 (0.096605) | 1.699754 / 1.468490 
(0.231264) | 0.709413 / 4.584777 (-3.875364) | 4.008904 / 3.745712 (0.263192) | 3.916540 / 5.269862 (-1.353322) | 1.902102 / 4.565676 (-2.663575) | 0.089048 / 0.424275 (-0.335227) | 0.012763 / 0.007607 (0.005155) | 0.498957 / 0.226044 (0.272913) | 4.979865 / 2.268929 (2.710937) | 2.301987 / 55.444624 (-53.142637) | 1.929404 / 6.876477 (-4.947073) | 2.107839 / 2.142072 (-0.034233) | 0.857253 / 4.805227 (-3.947974) | 0.171935 / 6.500664 (-6.328729) | 0.066753 / 0.075469 (-0.008716) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.186811 / 1.841788 (-0.654977) | 15.866319 / 8.074308 (7.792011) | 14.738555 / 10.191392 (4.547163) | 0.142879 / 0.680424 (-0.537544) | 0.017679 / 0.534201 (-0.516522) | 0.422840 / 0.579283 (-0.156443) | 0.450307 / 0.434364 (0.015943) | 0.491802 / 0.540337 (-0.048536) | 0.588837 / 1.386936 (-0.798099) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007659 / 0.011353 (-0.003694) | 0.005331 / 0.011008 (-0.005678) | 0.075360 / 0.038508 (0.036852) | 0.034011 / 0.023109 (0.010902) | 0.354488 / 0.275898 (0.078590) | 0.401781 / 0.323480 (0.078301) | 0.005806 / 0.007986 (-0.002179) | 0.004029 / 0.004328 (-0.000300) | 0.073822 / 0.004250 (0.069572) | 0.049067 / 0.037052 (0.012015) | 0.364483 / 0.258489 (0.105994) | 0.405637 / 0.293841 (0.111796) | 0.037166 / 0.128546 (-0.091380) | 0.012397 / 0.075646 (-0.063249) | 0.087346 / 0.419271 (-0.331926) | 0.050888 / 0.043533 (0.007355) | 0.334796 / 0.255139 (0.079657) | 0.387681 / 0.283200 (0.104481) | 0.105056 / 0.141683 (-0.036627) | 1.471630 / 1.452155 (0.019475) | 1.554764 / 1.492716 (0.062047) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.231825 / 0.018006 (0.213819) | 0.449746 / 0.000490 (0.449256) | 0.000888 / 0.000200 (0.000688) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030363 / 0.037411 (-0.007049) | 0.115234 / 0.014526 (0.100708) | 0.123005 / 0.176557 (-0.053551) | 0.172772 / 0.737135 (-0.564363) | 0.127818 / 0.296338 (-0.168520) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.425761 / 0.215209 (0.210552) | 4.237950 / 2.077655 (2.160295) | 1.992045 / 1.504120 (0.487925) | 1.801622 / 1.541195 (0.260427) | 1.918477 / 1.468490 (0.449987) | 0.722730 / 4.584777 (-3.862047) | 4.015968 / 3.745712 (0.270256) | 3.720412 / 5.269862 (-1.549450) | 1.763111 / 4.565676 (-2.802566) | 0.089041 / 0.424275 (-0.335234) | 0.012608 / 0.007607 (0.005001) | 0.522645 / 0.226044 (0.296601) | 5.227108 / 2.268929 (2.958180) | 2.444714 / 55.444624 (-52.999910) | 2.109745 / 6.876477 (-4.766732) | 2.194042 / 2.142072 (0.051969) | 0.871781 / 4.805227 (-3.933447) | 0.173149 / 6.500664 (-6.327515) | 0.066192 / 0.075469 (-0.009277) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.312051 / 1.841788 (-0.529737) | 16.024315 / 8.074308 (7.950007) | 15.123823 / 10.191392 (4.932431) | 0.163997 / 0.680424 (-0.516427) | 0.017595 / 0.534201 (-0.516606) | 0.426379 / 0.579283 (-0.152904) | 0.467709 / 0.434364 (0.033345) | 0.498308 / 0.540337 (-0.042030) | 0.591426 / 1.386936 (-0.795510) |\n\n</details>\n</details>\n\n\n",
"CI is failing due to unrelated issues, hopefully https://github.com/huggingface/datasets/pull/5642 fixes it",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006478 / 0.011353 (-0.004875) | 0.004347 / 0.011008 (-0.006661) | 0.097103 / 0.038508 (0.058595) | 0.027650 / 0.023109 (0.004541) | 0.372355 / 0.275898 (0.096457) | 0.408794 / 0.323480 (0.085314) | 0.005034 / 0.007986 (-0.002952) | 0.003252 / 0.004328 (-0.001076) | 0.074068 / 0.004250 (0.069818) | 0.035542 / 0.037052 (-0.001510) | 0.367392 / 0.258489 (0.108903) | 0.409644 / 0.293841 (0.115803) | 0.031745 / 0.128546 (-0.096801) | 0.011501 / 0.075646 (-0.064145) | 0.323355 / 0.419271 (-0.095917) | 0.043065 / 0.043533 (-0.000467) | 0.377313 / 0.255139 (0.122174) | 0.395326 / 0.283200 (0.112127) | 0.087101 / 0.141683 (-0.054582) | 1.461228 / 1.452155 (0.009073) | 1.529413 / 1.492716 (0.036696) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199245 / 0.018006 (0.181239) | 0.409978 / 0.000490 (0.409488) | 0.002655 / 0.000200 (0.002455) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023903 / 0.037411 (-0.013508) | 0.097855 / 0.014526 (0.083330) | 0.106405 / 0.176557 (-0.070152) | 0.166889 / 0.737135 (-0.570247) | 0.110256 / 0.296338 (-0.186082) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440351 / 0.215209 (0.225142) | 4.382848 / 2.077655 (2.305194) | 2.049602 / 1.504120 (0.545482) | 1.824638 / 1.541195 (0.283443) | 1.850519 / 1.468490 
(0.382029) | 0.702652 / 4.584777 (-3.882125) | 3.394571 / 3.745712 (-0.351141) | 1.940608 / 5.269862 (-3.329254) | 1.263961 / 4.565676 (-3.301716) | 0.083985 / 0.424275 (-0.340290) | 0.013046 / 0.007607 (0.005439) | 0.538272 / 0.226044 (0.312228) | 5.407563 / 2.268929 (3.138634) | 2.519207 / 55.444624 (-52.925418) | 2.153379 / 6.876477 (-4.723098) | 2.394512 / 2.142072 (0.252439) | 0.812840 / 4.805227 (-3.992387) | 0.152868 / 6.500664 (-6.347796) | 0.067823 / 0.075469 (-0.007646) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.220031 / 1.841788 (-0.621757) | 13.781237 / 8.074308 (5.706929) | 14.203975 / 10.191392 (4.012583) | 0.141077 / 0.680424 (-0.539347) | 0.016518 / 0.534201 (-0.517682) | 0.379079 / 0.579283 (-0.200204) | 0.378916 / 0.434364 (-0.055448) | 0.434589 / 0.540337 (-0.105749) | 0.521129 / 1.386936 (-0.865807) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006997 / 0.011353 (-0.004356) | 0.004599 / 0.011008 (-0.006410) | 0.078700 / 0.038508 (0.040192) | 0.027902 / 0.023109 (0.004793) | 0.344406 / 0.275898 (0.068508) | 0.392918 / 0.323480 (0.069438) | 0.005175 / 0.007986 (-0.002811) | 0.004755 / 0.004328 (0.000427) | 0.077707 / 0.004250 (0.073457) | 0.039409 / 0.037052 (0.002357) | 0.343250 / 0.258489 (0.084761) | 0.405544 / 0.293841 (0.111703) | 0.032286 / 0.128546 (-0.096260) | 0.011674 / 0.075646 (-0.063972) | 0.087633 / 0.419271 (-0.331639) | 0.043346 / 0.043533 (-0.000186) | 0.355076 / 0.255139 (0.099937) | 0.382155 / 0.283200 (0.098955) | 0.090914 / 0.141683 (-0.050769) | 1.518369 / 1.452155 (0.066215) | 1.583530 / 1.492716 (0.090813) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.160369 / 0.018006 (0.142362) | 0.406844 / 0.000490 (0.406354) | 0.002651 / 0.000200 (0.002451) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025295 / 0.037411 (-0.012116) | 0.101490 / 0.014526 (0.086964) | 0.108825 / 0.176557 (-0.067732) | 0.161673 / 0.737135 (-0.575462) | 0.113610 / 0.296338 (-0.182729) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443514 / 0.215209 (0.228305) | 4.436722 / 2.077655 (2.359067) | 2.144008 / 1.504120 (0.639888) | 2.005324 / 1.541195 (0.464129) | 2.123356 / 1.468490 (0.654866) | 0.697217 / 4.584777 (-3.887560) | 3.401105 / 3.745712 (-0.344607) | 1.874621 / 5.269862 (-3.395240) | 1.165069 / 4.565676 (-3.400608) | 0.082799 / 0.424275 (-0.341476) | 0.012806 / 0.007607 (0.005199) | 0.542688 / 0.226044 (0.316644) | 5.420963 / 2.268929 (3.152034) | 2.579034 / 55.444624 (-52.865590) | 2.240201 / 6.876477 (-4.636276) | 2.261309 / 2.142072 (0.119237) | 0.800246 / 4.805227 (-4.004981) | 0.150380 / 6.500664 (-6.350285) | 0.066880 / 0.075469 (-0.008589) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.281721 / 1.841788 (-0.560067) | 13.906361 / 8.074308 (5.832053) | 14.135336 / 10.191392 (3.943944) | 0.128865 / 0.680424 (-0.551559) | 0.016452 / 0.534201 (-0.517749) | 0.373563 / 0.579283 (-0.205720) | 0.385321 / 0.434364 (-0.049043) | 0.437198 / 0.540337 (-0.103139) | 0.530720 / 1.386936 (-0.856216) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008099 / 0.011353 (-0.003254) | 0.005093 / 0.011008 (-0.005916) | 0.106258 / 0.038508 (0.067750) | 0.037051 / 0.023109 (0.013942) | 0.347960 / 0.275898 (0.072062) | 0.370849 / 0.323480 (0.047369) | 0.006122 / 0.007986 (-0.001863) | 0.004094 / 0.004328 (-0.000235) | 0.079549 / 0.004250 (0.075299) | 0.046563 / 0.037052 (0.009510) | 0.332735 / 0.258489 (0.074246) | 0.417061 / 0.293841 (0.123220) | 0.038105 / 0.128546 (-0.090441) | 0.011886 / 0.075646 (-0.063760) | 0.342103 / 0.419271 (-0.077169) | 0.053233 / 0.043533 (0.009700) | 0.344754 / 0.255139 (0.089615) | 0.355354 / 0.283200 (0.072155) | 0.101059 / 0.141683 (-0.040624) | 1.518561 / 1.452155 (0.066406) | 1.558652 / 1.492716 (0.065935) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.225919 / 0.018006 (0.207913) | 0.518539 / 0.000490 (0.518049) | 0.006230 / 0.000200 (0.006030) | 0.000124 / 0.000054 (0.000070) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026782 / 0.037411 (-0.010629) | 0.108457 / 0.014526 (0.093931) | 0.125203 / 0.176557 (-0.051353) | 0.175726 / 0.737135 (-0.561409) | 0.127051 / 0.296338 (-0.169287) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.416427 / 0.215209 (0.201217) | 4.168851 / 2.077655 (2.091196) | 1.962238 / 1.504120 (0.458118) | 1.825224 / 1.541195 (0.284029) | 1.831200 / 1.468490 
(0.362710) | 0.765526 / 4.584777 (-3.819250) | 4.303957 / 3.745712 (0.558245) | 2.193467 / 5.269862 (-3.076395) | 1.654605 / 4.565676 (-2.911071) | 0.096709 / 0.424275 (-0.327566) | 0.013792 / 0.007607 (0.006185) | 0.537862 / 0.226044 (0.311818) | 5.152230 / 2.268929 (2.883302) | 2.520938 / 55.444624 (-52.923686) | 2.108422 / 6.876477 (-4.768054) | 2.214220 / 2.142072 (0.072147) | 0.834320 / 4.805227 (-3.970907) | 0.170635 / 6.500664 (-6.330029) | 0.063131 / 0.075469 (-0.012338) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.215767 / 1.841788 (-0.626020) | 15.254781 / 8.074308 (7.180473) | 14.360764 / 10.191392 (4.169372) | 0.172511 / 0.680424 (-0.507913) | 0.020161 / 0.534201 (-0.514040) | 0.426936 / 0.579283 (-0.152347) | 0.438771 / 0.434364 (0.004407) | 0.486973 / 0.540337 (-0.053364) | 0.584238 / 1.386936 (-0.802698) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006777 / 0.011353 (-0.004576) | 0.005304 / 0.011008 (-0.005704) | 0.073717 / 0.038508 (0.035209) | 0.033604 / 0.023109 (0.010494) | 0.340448 / 0.275898 (0.064550) | 0.351861 / 0.323480 (0.028381) | 0.005786 / 0.007986 (-0.002199) | 0.005013 / 0.004328 (0.000685) | 0.071263 / 0.004250 (0.067012) | 0.048189 / 0.037052 (0.011137) | 0.339457 / 0.258489 (0.080968) | 0.384383 / 0.293841 (0.090542) | 0.035563 / 0.128546 (-0.092983) | 0.011509 / 0.075646 (-0.064137) | 0.083722 / 0.419271 (-0.335550) | 0.048886 / 0.043533 (0.005353) | 0.350184 / 0.255139 (0.095045) | 0.361037 / 0.283200 (0.077837) | 0.105191 / 0.141683 (-0.036492) | 1.503247 / 1.452155 (0.051093) | 1.582298 / 1.492716 (0.089581) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221687 / 0.018006 (0.203681) | 0.466489 / 0.000490 (0.465999) | 0.000484 / 0.000200 (0.000284) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027978 / 0.037411 (-0.009434) | 0.119572 / 0.014526 (0.105047) | 0.133530 / 0.176557 (-0.043026) | 0.177892 / 0.737135 (-0.559243) | 0.127045 / 0.296338 (-0.169294) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430198 / 0.215209 (0.214989) | 4.435512 / 2.077655 (2.357858) | 2.007183 / 1.504120 (0.503063) | 1.799230 / 1.541195 (0.258036) | 1.884750 / 1.468490 (0.416260) | 0.745232 / 4.584777 (-3.839545) | 4.088069 / 3.745712 (0.342357) | 4.114669 / 5.269862 (-1.155193) | 2.374086 / 4.565676 (-2.191590) | 0.089154 / 0.424275 (-0.335121) | 0.012938 / 0.007607 (0.005331) | 0.505954 / 0.226044 (0.279909) | 5.194226 / 2.268929 (2.925298) | 2.487230 / 55.444624 (-52.957394) | 2.163353 / 6.876477 (-4.713124) | 2.177879 / 2.142072 (0.035807) | 0.828728 / 4.805227 (-3.976499) | 0.171157 / 6.500664 (-6.329507) | 0.062883 / 0.075469 (-0.012586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.275906 / 1.841788 (-0.565882) | 15.235484 / 8.074308 (7.161176) | 14.467396 / 10.191392 (4.276004) | 0.198994 / 0.680424 (-0.481430) | 0.020203 / 0.534201 (-0.513998) | 0.447904 / 0.579283 (-0.131380) | 0.454210 / 0.434364 (0.019846) | 0.528062 / 0.540337 (-0.012275) | 0.619311 / 1.386936 (-0.767625) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1362/comments | https://api.github.com/repos/huggingface/datasets/issues/1362/events | https://github.com/huggingface/datasets/pull/1362 | 760,138,233 | MDExOlB1bGxSZXF1ZXN0NTM1MDIwMDAz | 1,362 | adding opus_infopankki | [] | closed | false | null | 1 | 2020-12-09T08:57:10Z | 2020-12-09T18:16:20Z | 2020-12-09T18:13:48Z | null | Adding opus_infopankki
http://opus.nlpl.eu/infopankki-v1.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1362",
"merged_at": "2020-12-09T18:13:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1362"
} | true | [
"Thanks Quentin !"
] |
https://api.github.com/repos/huggingface/datasets/issues/4859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4859/comments | https://api.github.com/repos/huggingface/datasets/issues/4859/events | https://github.com/huggingface/datasets/issues/4859 | 1,342,231,016 | I_kwDODunzps5QANHo | 4,859 | can't install using conda on Windows 10 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2022-08-17T19:57:37Z | 2022-08-17T19:57:37Z | null | null | ## Describe the bug
I wanted to install using conda or Anaconda Navigator. That didn't work, so I had to install using pip.
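A quick way to confirm that the pip-installed package is the one actually being picked up (plain Python, nothing specific to this report):
```python
import datasets

# Should print 2.4.0, matching the pip install noted in the environment info below
print(datasets.__version__)
```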
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4859/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4859/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | [] | closed | false | null | 3 | 2020-06-05T19:19:26Z | 2020-06-11T07:47:26Z | 2020-06-11T07:47:26Z | null | This is a french binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a French "IMDB" dataset, with more reviews.
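Once merged, loading it should look something like this (a minimal sketch; the dataset id `allocine` and the usual train/validation/test splits are assumptions based on this PR, not quoted from it):
```python
from datasets import load_dataset

# French binary sentiment classification reviews
allocine = load_dataset("allocine")
print(allocine["train"][0])  # a review text with its binary sentiment label
```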
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/244",
"merged_at": "2020-06-11T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/244"
} | true | [
"great work @TheophileBlard ",
"LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ",
"It was pretty easy actually. Documentation is on point !"
] |
https://api.github.com/repos/huggingface/datasets/issues/939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/939/comments | https://api.github.com/repos/huggingface/datasets/issues/939/events | https://github.com/huggingface/datasets/pull/939 | 753,965,405 | MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz | 939 | add wisesight_sentiment | [] | closed | false | null | 4 | 2020-12-01T03:06:39Z | 2020-12-02T04:52:38Z | 2020-12-02T04:35:51Z | null | Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for wisesight_sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Paper:**
- **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/
- **Point of Contact:** https://github.com/PyThaiNLP/
### Dataset Summary
Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
- Released to public domain under Creative Commons Zero v1.0 Universal license.
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3}
- Size: 26,737 messages
- Language: Central Thai
- Style: Informal and conversational. With some news headlines and advertisements.
- Time period: Around 2016 to early 2019. With a small amount from other periods.
- Domains: Mixed. The majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (everyone can see) that were made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Alterations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- A large number of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
- More characteristics of the data can be explored in [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb)
### Supported Tasks and Leaderboards
Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/)
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'category': 'pos', 'texts': 'น่าสนนน'}
{'category': 'neu', 'texts': 'ครับ #phithanbkk'}
{'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'}
{'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'}
```
### Data Fields
- `texts`: texts
- `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3)
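For clarity, the label mapping spelled out above as a plain Python dict (taken directly from the summary; nothing here is specific to the loading script):
```python
# id -> label name, as listed in the dataset summary
id2label = {0: "pos", 1: "neu", 2: "neg", 3: "q"}
label2id = {name: idx for idx, name in id2label.items()}
```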
### Data Splits
| | train | valid | test |
|-----------|-------|-------|-------|
| # samples | 21628 | 2404 | 2671 |
| # neu | 11795 | 1291 | 1453 |
| # neg | 5491 | 637 | 683 |
| # pos | 3866 | 434 | 478 |
| # q | 476 | 42 | 57 |
| avg words | 27.21 | 27.18 | 27.12 |
| avg chars | 89.82 | 89.50 | 90.36 |
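A minimal way to load and inspect the splits above once the dataset is available (a sketch; the exact split names `train`/`validation`/`test` are an assumption):
```python
from datasets import load_dataset

ds = load_dataset("wisesight_sentiment")
print(ds)              # DatasetDict with the splits summarized in the table above
print(ds["train"][0])  # {'texts': ..., 'category': ...}
```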
## Dataset Creation
### Curation Rationale
Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn University by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai.
### Source Data
#### Initial Data Collection and Normalization
- Style: Informal and conversational. With some news headlines and advertisements.
- Time period: Around 2016 to early 2019. With a small amount from other periods.
- Domains: Mixed. The majority are consumer products and services (restaurants, cosmetics, drinks, cars, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (everyone can see) that were made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remaining in the set, please tell us, so we can remove them.
- Alterations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- A large number of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuations, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
#### Who are the source language producers?
Social media users in Thailand
### Annotations
#### Annotation process
- Sentiment values are assigned by human annotators.
- A human annotator puts his/her best effort into assigning just one label, out of four, to a message.
- Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative.
- Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could have a positive sentiment value, if it shows interest in the product.
- Saying that another product or service is better is counted as negative.
- General information or news titles tend to be counted as neutral.
#### Who are the annotators?
Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/)
### Personal and Sensitive Information
- We try to exclude any known personally identifiable information from this data set.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remaining in the set, please tell us, so we can remove them.
## Considerations for Using the Data
### Social Impact of Dataset
- `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai
- There are risks that some personal information escapes the anonymization process
### Discussion of Biases
- A message can be ambiguous. When possible, the judgement will be based solely on the text itself.
- In some situations, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess.
- In some cases, the human annotator may have had access to the message's context, like an image. This additional information is not included as part of this corpus.
### Other Known Limitations
- The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question).
- Misspellings in social media texts make the word tokenization process for Thai difficult, thus impacting model performance
## Additional Information
### Dataset Curators
Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/
### Licensing Information
- If applicable, copyright of each message content belongs to the original poster.
- **Annotation data (labels) are released to public domain.**
- [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree with the labels made by the human annotators. This annotation is for research purposes and does not reflect the professional work that Wisesight has done for its customers.
- The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she gave to the message does not necessarily reflect his/her personal view towards the message.
### Citation Information
Please cite the following if you make use of the dataset:
Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September.
BibTeX:
```
@software{bact_2019_3457447,
author = {Suriyawongkul, Arthit and
Chuangsuwanich, Ekapol and
Chormai, Pattarawat and
Polpanumas, Charin},
title = {PyThaiNLP/wisesight-sentiment: First release},
month = sep,
year = 2019,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.3457447},
url = {https://doi.org/10.5281/zenodo.3457447}
}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/939/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/939",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/939"
} | true | [
"@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_flue\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_load_dataset_xglue\r\n```",
"@cstorm125 I really like the dataset and dataset card but there seems to have been a rebase issue at some point since it's now changing 140 files :D \r\n\r\nCould you rebase from master?",
"I think it might be faster to close and reopen.",
"To be continued on: https://github.com/huggingface/datasets/pull/981"
] |
https://api.github.com/repos/huggingface/datasets/issues/4773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4773/comments | https://api.github.com/repos/huggingface/datasets/issues/4773/events | https://github.com/huggingface/datasets/pull/4773 | 1,322,796,721 | PR_kwDODunzps48WNV3 | 4,773 | Document loading from relative path | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 5 | 2022-07-29T23:32:21Z | 2022-08-25T18:36:45Z | 2022-08-25T18:34:23Z | null | This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4773/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4773.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4773",
"merged_at": "2022-08-25T18:34:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4773.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4773"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the feedback!\r\n\r\nI agree that adding it to `load_hub.mdx` is probably a bit too specific, especially for beginners reading the tutorials. Since this clarification is closely related to loading from the Hub (the only difference being the presence/absence of a loading script), I think it makes the most sense to keep it somewhere in `loading.mdx`. What do you think about adding a Warning in Loading >>> Hugging Face Hub that explains the difference between relative/absolute paths when there is a script?",
"What about updating the section about \"manual download\" ? I think it goes there no ?\r\n\r\nhttps://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download",
"Updated the manual download section :)",
"Thanks ! Pinging @albertvillanova to review this change, and then I think we're good to merge"
] |
https://api.github.com/repos/huggingface/datasets/issues/1398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1398/comments | https://api.github.com/repos/huggingface/datasets/issues/1398/events | https://github.com/huggingface/datasets/pull/1398 | 760,497,024 | MDExOlB1bGxSZXF1ZXN0NTM1MzE4NTg5 | 1,398 | Add Neural Code Search Dataset | [] | closed | false | null | 3 | 2020-12-09T16:52:16Z | 2020-12-09T18:02:27Z | 2020-12-09T18:02:27Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1398/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1398",
"merged_at": "2020-12-09T18:02:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1398"
} | true | [
"@lhoestq Refactored into new branch, please review :) ",
"The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1170/comments | https://api.github.com/repos/huggingface/datasets/issues/1170/events | https://github.com/huggingface/datasets/pull/1170 | 757,754,378 | MDExOlB1bGxSZXF1ZXN0NTMzMDczOTU0 | 1,170 | Fix path handling for Windows | [] | closed | false | null | 1 | 2020-12-05T18:31:54Z | 2020-12-07T10:47:23Z | 2020-12-07T10:47:23Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1170/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1170",
"merged_at": "2020-12-07T10:47:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1170"
} | true | [
"@lhoestq here's the fix!"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/1731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1731/comments | https://api.github.com/repos/huggingface/datasets/issues/1731/events | https://github.com/huggingface/datasets/issues/1731 | 784,744,674 | MDU6SXNzdWU3ODQ3NDQ2NzQ= | 1,731 | Couldn't reach swda.py | [] | closed | false | null | 2 | 2021-01-13T02:57:40Z | 2021-01-13T11:17:40Z | 2021-01-13T11:17:40Z | null | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
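For reference, the resolution in the comments below is to install the library from the master branch; a minimal sketch of retrying after that (assuming the dataset id is `swda`):
```python
# after installing the latest version from source, e.g.
# pip install git+https://github.com/huggingface/datasets
from datasets import load_dataset

swda = load_dataset("swda")
```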
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1731/timeline | null | completed | null | null | false | [
"Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface/datasets/issues/1641#issuecomment-751571471).\r\nLet me know if this helps !",
"Thanks @SBrandeis ,\r\nProblem solved by downloading and installing the latest version datasets."
] |
https://api.github.com/repos/huggingface/datasets/issues/5770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5770/comments | https://api.github.com/repos/huggingface/datasets/issues/5770/events | https://github.com/huggingface/datasets/pull/5770 | 1,673,581,555 | PR_kwDODunzps5OmntV | 5,770 | Add IterableDataset.from_spark | [] | closed | false | null | 8 | 2023-04-18T17:47:53Z | 2023-05-17T14:07:32Z | 2023-05-17T14:00:38Z | null | Follow-up from https://github.com/huggingface/datasets/pull/5701
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5770/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"merged_at": "2023-05-17T14:00:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it can be more intuitive IMO :)",
"Thanks for reviewing! I moved the streaming behavior to IterableDataset.from_spark",
"Thanks Quentin! I'll flesh out the docs in a follow-up PR",
"Friendly ping @lhoestq ",
"Thanks @lhoestq ! I fixed the partition order thing and added more unit tests.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006165 / 0.011353 (-0.005188) | 0.004497 / 0.011008 (-0.006511) | 0.099142 / 0.038508 (0.060634) | 0.027479 / 0.023109 (0.004369) | 0.352491 / 0.275898 (0.076593) | 0.402993 / 0.323480 (0.079513) | 0.004885 / 0.007986 (-0.003100) | 0.003315 / 0.004328 (-0.001013) | 0.075787 / 0.004250 (0.071537) | 0.035320 / 0.037052 (-0.001732) | 0.368401 / 0.258489 (0.109912) | 0.409090 / 0.293841 (0.115249) | 0.030125 / 0.128546 (-0.098421) | 0.011670 / 0.075646 (-0.063976) | 0.324381 / 0.419271 (-0.094890) | 0.050815 / 0.043533 (0.007283) | 0.352598 / 0.255139 (0.097460) | 0.389189 / 0.283200 (0.105989) | 0.092873 / 0.141683 (-0.048810) | 1.485140 / 1.452155 (0.032986) | 1.545586 / 1.492716 (0.052869) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.199522 / 0.018006 (0.181516) | 0.404576 / 0.000490 (0.404087) | 0.003322 / 0.000200 (0.003122) | 0.000074 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022945 / 0.037411 (-0.014466) | 0.095512 / 0.014526 (0.080987) | 0.103077 / 0.176557 (-0.073480) | 0.163918 / 0.737135 (-0.573217) | 0.105560 / 0.296338 (-0.190779) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.417360 / 0.215209 (0.202151) | 4.161693 / 2.077655 (2.084039) | 1.851941 / 1.504120 (0.347821) | 1.649872 / 1.541195 (0.108677) | 1.682099 / 1.468490 
(0.213609) | 0.693187 / 4.584777 (-3.891590) | 3.462528 / 3.745712 (-0.283184) | 1.839893 / 5.269862 (-3.429968) | 1.155945 / 4.565676 (-3.409731) | 0.082611 / 0.424275 (-0.341664) | 0.012076 / 0.007607 (0.004469) | 0.514325 / 0.226044 (0.288280) | 5.155052 / 2.268929 (2.886123) | 2.307280 / 55.444624 (-53.137345) | 1.966483 / 6.876477 (-4.909994) | 2.018892 / 2.142072 (-0.123181) | 0.803068 / 4.805227 (-4.002159) | 0.152213 / 6.500664 (-6.348451) | 0.066320 / 0.075469 (-0.009149) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.218578 / 1.841788 (-0.623209) | 13.563869 / 8.074308 (5.489561) | 13.954596 / 10.191392 (3.763204) | 0.151527 / 0.680424 (-0.528897) | 0.016655 / 0.534201 (-0.517546) | 0.380637 / 0.579283 (-0.198646) | 0.395854 / 0.434364 (-0.038509) | 0.459111 / 0.540337 (-0.081226) | 0.560219 / 1.386936 (-0.826717) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006427 / 0.011353 (-0.004926) | 0.004728 / 0.011008 (-0.006280) | 0.080525 / 0.038508 (0.042017) | 0.027294 / 0.023109 (0.004185) | 0.414688 / 0.275898 (0.138790) | 0.449882 / 0.323480 (0.126402) | 0.004771 / 0.007986 (-0.003214) | 0.003402 / 0.004328 (-0.000926) | 0.078748 / 0.004250 (0.074497) | 0.037046 / 0.037052 (-0.000007) | 0.417398 / 0.258489 (0.158909) | 0.462921 / 0.293841 (0.169080) | 0.030364 / 0.128546 (-0.098182) | 0.011810 / 0.075646 (-0.063837) | 0.089787 / 0.419271 (-0.329485) | 0.039806 / 0.043533 (-0.003727) | 0.403401 / 0.255139 (0.148262) | 0.439477 / 0.283200 (0.156278) | 0.088431 / 0.141683 (-0.053252) | 1.534373 / 1.452155 (0.082219) | 1.592316 / 1.492716 (0.099600) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.217701 / 0.018006 (0.199695) | 0.384770 / 0.000490 (0.384280) | 0.000437 / 0.000200 (0.000237) | 0.000061 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024952 / 0.037411 (-0.012459) | 0.098728 / 0.014526 (0.084202) | 0.106324 / 0.176557 (-0.070233) | 0.155484 / 0.737135 (-0.581651) | 0.109503 / 0.296338 (-0.186836) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.450639 / 0.215209 (0.235430) | 4.523110 / 2.077655 (2.445455) | 2.224810 / 1.504120 (0.720690) | 2.119516 / 1.541195 (0.578321) | 2.225192 / 1.468490 (0.756702) | 0.695397 / 4.584777 (-3.889380) | 3.433559 / 3.745712 (-0.312153) | 2.633127 / 5.269862 (-2.636735) | 1.448471 / 4.565676 (-3.117206) | 0.082262 / 0.424275 (-0.342013) | 0.012246 / 0.007607 (0.004639) | 0.561243 / 0.226044 (0.335199) | 5.652711 / 2.268929 (3.383782) | 2.689771 / 55.444624 (-52.754853) | 2.359512 / 6.876477 (-4.516965) | 2.471098 / 2.142072 (0.329026) | 0.802955 / 4.805227 (-4.002272) | 0.151142 / 6.500664 (-6.349522) | 0.067494 / 0.075469 (-0.007975) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306879 / 1.841788 (-0.534909) | 14.030775 / 8.074308 (5.956467) | 12.917790 / 10.191392 (2.726398) | 0.141269 / 0.680424 (-0.539155) | 0.016264 / 0.534201 (-0.517937) | 0.411957 / 0.579283 (-0.167326) | 0.393235 / 0.434364 (-0.041129) | 0.505144 / 0.540337 (-0.035193) | 0.590660 / 1.386936 (-0.796276) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2366/comments | https://api.github.com/repos/huggingface/datasets/issues/2366/events | https://github.com/huggingface/datasets/issues/2366 | 893,185,266 | MDU6SXNzdWU4OTMxODUyNjY= | 2,366 | Json loader fails if user-specified features don't match the json data fields order | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-05-17T10:26:08Z | 2021-06-16T10:47:49Z | 2021-06-16T10:47:49Z | null | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then, depending on the order of the fields in the JSON data, it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast.
One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2366/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4293/comments | https://api.github.com/repos/huggingface/datasets/issues/4293/events | https://github.com/huggingface/datasets/pull/4293 | 1,228,815,477 | PR_kwDODunzps43dRt9 | 4,293 | Fix wrong map parameter name in cache docs | [] | closed | false | null | 1 | 2022-05-08T07:27:46Z | 2022-06-14T16:49:00Z | 2022-06-14T16:07:00Z | null | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4293/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4293",
"merged_at": "2022-06-14T16:07:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4293"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1365/comments | https://api.github.com/repos/huggingface/datasets/issues/1365/events | https://github.com/huggingface/datasets/pull/1365 | 760,188,457 | MDExOlB1bGxSZXF1ZXN0NTM1MDYxNTI2 | 1,365 | Add Mkqa dataset | [] | closed | false | null | 2 | 2020-12-09T10:06:33Z | 2020-12-10T15:37:56Z | 2020-12-10T15:37:56Z | null | # MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differently from the original (a short sketch of the defaulting follows this list):
- answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- answer:entity field has a default value of empty string '' (since this key is not available for all entries in the original)
- answer:alias field has a default value of []
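Purely for illustration, this is roughly what the defaulting described above could look like when building examples (the field names come from the list above; the record below is a toy one, not taken from MKQA):
```python
# Toy record missing the optional keys, to show the defaulting described above.
raw_answer = {"type": "entity"}  # hypothetical input record

answer = {
    "type": raw_answer["type"],               # exposed as a ClassLabel in the dataset
    "entity": raw_answer.get("entity", ""),   # default: empty string when the key is absent
    "alias": raw_answer.get("alias", []),     # default: empty list when the key is absent
}
print(answer)  # {'type': 'entity', 'entity': '', 'alias': []}
```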
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1365/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1365",
"merged_at": "2020-12-10T15:37:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1365"
} | true | [
"the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3830/comments | https://api.github.com/repos/huggingface/datasets/issues/3830/events | https://github.com/huggingface/datasets/issues/3830 | 1,160,181,404 | I_kwDODunzps5FJvac | 3,830 | Got error when load cnn_dailymail dataset | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 2 | 2022-03-05T01:43:12Z | 2022-03-07T06:53:41Z | 2022-03-07T06:53:41Z | null | When using the datasets.load_dataset method to load the cnn_dailymail dataset, I got the errors below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。[translation: The system cannot find the path specified.]: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The code used to load the dataset:
windows os:
```
from datasets import load_dataset
dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data")
```
google colab:
```
import datasets
train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3830/timeline | null | completed | null | null | false | [
"Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1 import datasets\r\n 2 \r\n----> 3 train_data = datasets.load_dataset(\"cnn_dailymail\", \"3.0.0\", split=\"train\")\r\n\r\n5 frames\r\n[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs)\r\n 1705 ignore_verifications=ignore_verifications,\r\n 1706 try_from_hf_gcs=try_from_hf_gcs,\r\n-> 1707 use_auth_token=use_auth_token,\r\n 1708 )\r\n 1709 \r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)\r\n 593 if not downloaded_from_gcs:\r\n 594 self._download_and_prepare(\r\n--> 595 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n 596 )\r\n 597 # Sync info\r\n\r\n[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)\r\n 659 split_dict = SplitDict(dataset_name=self.name)\r\n 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)\r\n--> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n 662 \r\n 663 # Checksums verification\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _split_generators(self, dl_manager)\r\n 253 def _split_generators(self, dl_manager):\r\n 254 dl_paths = dl_manager.download_and_extract(_DL_URLS)\r\n--> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN)\r\n 256 # Generate shared vocabulary\r\n 257 \r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _subset_filenames(dl_paths, split)\r\n 154 else:\r\n 155 logger.fatal(\"Unsupported split: %s\", split)\r\n--> 156 cnn = _find_files(dl_paths, \"cnn\", urls)\r\n 157 dm = _find_files(dl_paths, \"dm\", urls)\r\n 158 return cnn + dm\r\n\r\n[/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py](https://localhost:8080/#) in _find_files(dl_paths, publisher, url_dict)\r\n 133 else:\r\n 134 logger.fatal(\"Unsupported publisher: %s\", publisher)\r\n--> 135 files = sorted(os.listdir(top_dir))\r\n 136 \r\n 137 ret_files = []\r\n\r\nNotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'\r\n```",
"Hi @jon-tow, thanks for reporting. And hi @dynamicwebpaige, thanks for your investigation. \r\n\r\nThis issue was already reported \r\n- #3784\r\n\r\nand its root cause is a change in the Google Drive service. See:\r\n- #3786 \r\n\r\nWe have already fixed it. See:\r\n- #3787 \r\n\r\nWe are planning to make a patch release today (indeed, we were planning to do it last Friday).\r\n\r\nIn the meantime, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```\r\n\r\nCC: @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/4076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4076/comments | https://api.github.com/repos/huggingface/datasets/issues/4076/events | https://github.com/huggingface/datasets/pull/4076 | 1,188,478,867 | PR_kwDODunzps41a1n2 | 4,076 | Add ROUGE Metric Card | [] | closed | false | null | 1 | 2022-03-31T18:34:34Z | 2022-04-12T20:43:45Z | 2022-04-12T20:37:38Z | null | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4076/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4076",
"merged_at": "2022-04-12T20:37:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4076"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/825/comments | https://api.github.com/repos/huggingface/datasets/issues/825/events | https://github.com/huggingface/datasets/pull/825 | 739,925,960 | MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx | 825 | Add accuracy, precision, recall and F1 metrics | [] | closed | false | null | 0 | 2020-11-10T13:50:35Z | 2020-11-11T19:23:48Z | 2020-11-11T19:23:43Z | null | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
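For illustration, a minimal usage sketch for one of the metrics listed above (this assumes the `datasets.load_metric` entry point and sklearn-style keyword arguments; the values are toy examples, not taken from this PR):
```python
from datasets import load_metric

f1 = load_metric("f1")
score = f1.compute(
    predictions=[0, 1, 2, 0, 1],
    references=[0, 1, 1, 0, 2],
    average="macro",  # macro/micro/weighted/... forwarded to the sklearn metric
)
print(score)  # e.g. {'f1': ...}
```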
They all use the sklearn metrics of the same name under the hood. They offer several useful options when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only the selected labels (usually what we call the positive labels) and ignore the negative ones. For example, in a Named Entity Recognition task, the positive labels are (`PERSON`, `LOCATION` or `ORGANIZATION`) and the negative one is `O`. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/825/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/825",
"merged_at": "2020-11-11T19:23:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/825"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3047/comments | https://api.github.com/repos/huggingface/datasets/issues/3047/events | https://github.com/huggingface/datasets/issues/3047 | 1,021,360,616 | I_kwDODunzps484Lno | 3,047 | Loading from cache a dataset for LM built from a text classification dataset sometimes errors | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-08T18:23:11Z | 2021-11-03T17:13:08Z | 2021-11-03T17:13:08Z | null | ## Describe the bug
Yes, I know, that description sucks. The problem arises in the course when we build a masked language modeling dataset from the IMDB dataset. To reproduce (or try to, since it's a bit fickle):
Create a dataset for masked-language modeling from the IMDB dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
imdb_dataset = load_dataset("imdb", split="train")
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
chunk_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len.
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Until now, all is well. The problem comes when you re-execute that code, more specifically:
```python
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-357a56ee3d53> in <module>
----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True)
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1947 new_fingerprint=new_fingerprint,
1948 disable_tqdm=disable_tqdm,
-> 1949 desc=desc,
1950 )
1951 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
424 }
425 # apply actual function
--> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
428 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2138 if os.path.exists(cache_file_name) and load_from_cache_file:
2139 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2140 info = self.info.copy()
2141 info.features = features
2142 return Dataset.from_file(cache_file_name, info=info, split=self.split)
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and to look at a key that has since been removed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3047/timeline | null | completed | null | null | false | [
"This has been fixed in 1.15, let me know if you still have this issue"
] |
https://api.github.com/repos/huggingface/datasets/issues/2534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2534/comments | https://api.github.com/repos/huggingface/datasets/issues/2534/events | https://github.com/huggingface/datasets/pull/2534 | 927,201,435 | MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0 | 2,534 | Sync with transformers disabling NOTSET | [] | closed | false | null | 2 | 2021-06-22T12:54:21Z | 2021-06-24T14:42:47Z | 2021-06-24T14:42:47Z | null | Close #2528. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2534/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2534.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2534",
"merged_at": "2021-06-24T14:42:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2534.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2534"
} | true | [
"Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?",
"Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2470/comments | https://api.github.com/repos/huggingface/datasets/issues/2470/events | https://github.com/huggingface/datasets/issues/2470 | 916,724,260 | MDU6SXNzdWU5MTY3MjQyNjA= | 2,470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2021-06-09T22:40:22Z | 2021-07-01T09:34:54Z | 2021-07-01T09:11:13Z | null | ## Describe the bug
Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 worked before, but now it seems either inconsistent or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated; I'm happy to provide more info if it would help us diagnose.
## Steps to reproduce the bug
```python
# this function will be applied with map()
def tokenize_function(examples):
return tokenizer(
examples["text"],
padding=PaddingStrategy.DO_NOT_PAD,
truncation=True,
)
# data_files is a Dict[str, str] mapping name -> path
datasets = load_dataset("text", data_files={...})
# this is where the error happens if num_proc = 16,
# but is fine if num_proc = 1
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=num_workers,
)
```
## Expected results
The `map()` function succeeds with `num_proc` > 1.
## Actual results


## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, but I think N/A for this issue
- Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
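A possible workaround while this is investigated (an illustrative sketch only, reusing the names from the reproduction above; it simply caps `num_proc` at the size of the smallest split):
```python
# Illustrative workaround: never request more worker processes than the
# smallest split has rows (the tiny validation split seems to trigger the crash).
smallest_split = min(len(split) for split in datasets.values())
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=min(num_workers, smallest_split),
)
```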
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2470/timeline | null | completed | null | null | false | [
"Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?",
"Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.",
"Could you trying reinstalling pyarrow with pip ?\r\nI'm not sure why it would check in your multicurtural-sc directory for source files.",
"Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:\r\n\r\n```bash\r\n$ pip install --upgrade --force-reinstall pyarrow\r\nCollecting pyarrow\r\n Downloading pyarrow-4.0.1-cp39-cp39-manylinux2014_x86_64.whl (21.9 MB)\r\n |████████████████████████████████| 21.9 MB 23.8 MB/s\r\nCollecting numpy>=1.16.6\r\n Using cached numpy-1.20.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.4 MB)\r\nInstalling collected packages: numpy, pyarrow\r\n Attempting uninstall: numpy\r\n Found existing installation: numpy 1.20.3\r\n Uninstalling numpy-1.20.3:\r\n Successfully uninstalled numpy-1.20.3\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 3.0.0\r\n Uninstalling pyarrow-3.0.0:\r\n Successfully uninstalled pyarrow-3.0.0\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndatasets 1.8.0 requires pyarrow<4.0.0,>=1.0.0, but you have pyarrow 4.0.1 which is incompatible.\r\nSuccessfully installed numpy-1.20.3 pyarrow-4.0.1\r\n```\r\n\r\nTrying it, the same issue:\r\n\r\n\r\n\r\nI tried installing `\"pyarrow<4.0.0\"`, which gave me 3.0.0. Running, still, same issue.\r\n\r\nI agree it's weird that pyarrow is checking the source code directory for its files. (There is no `pyarrow/` directory there.) To me, that makes it seem like an issue with how pyarrow is called.\r\n\r\nOut of curiosity, I tried running this with fewer workers to see when the error arises:\r\n\r\n- 1: ✅\r\n- 2: ✅\r\n- 4: ✅\r\n- 8: ✅\r\n- 10: ✅\r\n- 11: ❌ 🤔\r\n- 12: ❌\r\n- 16: ❌\r\n- 32: ❌\r\n\r\nchecking my datasets:\r\n\r\n```python\r\n>>> datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 389290\r\n })\r\n validation.sc: Dataset({\r\n features: ['text'],\r\n num_rows: 10 # 🤔\r\n })\r\n validation.wvs: Dataset({\r\n features: ['text'],\r\n num_rows: 93928\r\n })\r\n})\r\n```\r\n\r\nNew hypothesis: crash if `num_proc` > length of a dataset? 😅\r\n\r\nIf so, this might be totally my fault, as the caller. Could be a docs fix, or maybe this library could do a check to limit `num_proc` for this case?",
"Good catch ! Not sure why it could raise such a weird issue from pyarrow though\r\nWe should definitely reduce num_proc to the length of the dataset if needed and log a warning.",
"This has been fixed in #2566, thanks @connor-mccarthy !\r\nWe'll make a new release soon that includes the fix ;)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3032/comments | https://api.github.com/repos/huggingface/datasets/issues/3032/events | https://github.com/huggingface/datasets/issues/3032 | 1,016,488,475 | I_kwDODunzps48lmIb | 3,032 | Error when loading private dataset with "data_files" arg | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-05T15:46:27Z | 2021-10-12T15:26:22Z | 2021-10-12T15:25:46Z | null | ## Describe the bug
Private datasets with no loading script can't be loaded using `data_files` parameter.
## Steps to reproduce the bug
```python
from datasets import load_dataset
data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True)
```
Same error happens in non-streaming mode.
## Expected results
Files should be loaded (whether in streaming or not).
## Actual results
Error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
539 try:
--> 540 local_path = cached_path(file_path, download_config=download_config)
541 except FileNotFoundError:
8 frames
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
547 except Exception:
548 raise FileNotFoundError(
--> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. "
550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets"
551 )
FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3032/timeline | null | completed | null | null | false | [
"We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !"
] |
https://api.github.com/repos/huggingface/datasets/issues/449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/449/comments | https://api.github.com/repos/huggingface/datasets/issues/449/events | https://github.com/huggingface/datasets/pull/449 | 666,898,923 | MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx | 449 | add reuters21578 dataset | [] | closed | false | null | 3 | 2020-07-28T08:58:12Z | 2020-08-03T11:10:31Z | 2020-08-03T11:10:31Z | null | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line, as sketched below (maybe there is a better way to do it, happy to get your opinion on it)
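For illustration only, a minimal sketch of that line-by-line approach (the helper name and the encoding are assumptions, not taken from the actual script):
```python
# Illustrative sketch: scan an .sgm file as plain text, without any SGML/XML parser.
def iter_sgm_lines(path, encoding="latin-1"):  # encoding is an assumption
    with open(path, encoding=encoding) as f:
        for line in f:
            yield line.rstrip("\n")
```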
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split : train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I use the last one, as the readme file highlights that this split makes it possible to compare results with those of the first 2 splits.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/449/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/449.diff",
"html_url": "https://github.com/huggingface/datasets/pull/449",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/449.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/449"
} | true | [
"> Awesome !\r\n> Good job on parsing these files :O\r\n> \r\n> Do you think it would be hard to get the two other split configurations ?\r\n\r\nIt shouldn't be that hard, I think I can consider different config names for each split ",
"> > Awesome !\r\n> > Good job on parsing these files :O\r\n> > Do you think it would be hard to get the two other split configurations ?\r\n> \r\n> It shouldn't be that hard, I think I can consider different config names for each split\r\n\r\nYes that would be perfect",
"closing this PR and opening a new one to fix the circle CI problems"
] |
https://api.github.com/repos/huggingface/datasets/issues/4079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4079/comments | https://api.github.com/repos/huggingface/datasets/issues/4079/events | https://github.com/huggingface/datasets/pull/4079 | 1,189,521,576 | PR_kwDODunzps41eYRC | 4,079 | Increase max retries for GitHub datasets | [] | closed | false | null | 1 | 2022-04-01T09:34:03Z | 2022-04-01T15:32:40Z | 2022-04-01T15:27:11Z | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4079/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4079",
"merged_at": "2022-04-01T15:27:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4079"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2624/comments | https://api.github.com/repos/huggingface/datasets/issues/2624/events | https://github.com/huggingface/datasets/issues/2624 | 941,318,247 | MDU6SXNzdWU5NDEzMTgyNDc= | 2,624 | can't set verbosity for `metric.py` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-07-10T20:23:45Z | 2021-07-12T05:54:29Z | 2021-07-12T05:54:29Z | null | ## Describe the bug
```
[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock
[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.
[2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
[2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow
```
As you can see, `datasets` logging comes from different places.
`filelock`, `arrow_writer` & `arrow_dataset` come from `datasets.*`, which is expected.
However, `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/`
So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message which is annoying during evaluation.
I had to do
```
logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR)
```
to fully mute these messages
## Expected results
it shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: tried both 1.8.0 & 1.9.0
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2624/timeline | null | completed | null | null | false | [
"Thanks @thomas-happify for reporting and thanks @mariosasko for the fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/981/comments | https://api.github.com/repos/huggingface/datasets/issues/981/events | https://github.com/huggingface/datasets/pull/981 | 754,937,612 | MDExOlB1bGxSZXF1ZXN0NTMwNzQ0MTYx | 981 | add wisesight_sentiment take2 | [] | closed | false | null | 0 | 2020-12-02T04:50:59Z | 2020-12-02T10:37:13Z | 2020-12-02T10:37:13Z | null | Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/981/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/981.diff",
"html_url": "https://github.com/huggingface/datasets/pull/981",
"merged_at": "2020-12-02T10:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/981.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/981"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2219/comments | https://api.github.com/repos/huggingface/datasets/issues/2219/events | https://github.com/huggingface/datasets/pull/2219 | 857,321,242 | MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3 | 2,219 | Added CUAD dataset | [] | closed | false | null | 3 | 2021-04-13T21:05:03Z | 2021-04-24T14:25:51Z | 2021-04-16T08:50:44Z | null | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2219/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2219",
"merged_at": "2021-04-16T08:50:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2219"
} | true | [
"1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while",
"@bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? ",
"@MohammedRakib you can check [#2257](https://github.com/huggingface/datasets/pull/2257)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | [] | closed | false | null | 3 | 2021-03-29T10:47:50Z | 2021-03-30T10:20:23Z | 2021-03-30T10:20:23Z | null | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look please? @lhoestq Thank you for your help to fix this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | null | null | false | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations",
"I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1596/comments | https://api.github.com/repos/huggingface/datasets/issues/1596/events | https://github.com/huggingface/datasets/pull/1596 | 770,260,531 | MDExOlB1bGxSZXF1ZXN0NTQyMDM3NTU0 | 1,596 | made suggested changes to hate-speech-and-offensive-language | [] | closed | false | null | 0 | 2020-12-17T18:09:26Z | 2020-12-17T18:36:02Z | 2020-12-17T18:35:53Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1596/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1596",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1596"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/3856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3856/comments | https://api.github.com/repos/huggingface/datasets/issues/3856/events | https://github.com/huggingface/datasets/pull/3856 | 1,162,522,034 | PR_kwDODunzps40GUSf | 3,856 | Fix push_to_hub with null images | [] | closed | false | null | 1 | 2022-03-08T11:07:09Z | 2022-03-08T15:22:17Z | 2022-03-08T15:22:16Z | null | This code currently raises an error because of the null image:
```python
import datasets
dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] }
features = datasets.Features({
'name': datasets.Value('string'),
'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable
```
I fixed this in this PR
TODO:
- [x] add a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3856/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3856.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3856",
"merged_at": "2022-03-08T15:22:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3856.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3856"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3392/comments | https://api.github.com/repos/huggingface/datasets/issues/3392/events | https://github.com/huggingface/datasets/issues/3392 | 1,073,073,408 | I_kwDODunzps4_9c0A | 3,392 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts` | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-12-07T08:41:01Z | 2021-12-07T14:04:28Z | 2021-12-07T14:04:28Z | null | ## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603
Am I the one who added this dataset ?
No -> @dansbecker | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3392/timeline | null | completed | null | null | false | [
"This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/439/comments | https://api.github.com/repos/huggingface/datasets/issues/439/events | https://github.com/huggingface/datasets/issues/439 | 665,964,673 | MDU6SXNzdWU2NjU5NjQ2NzM= | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | [] | closed | false | null | 5 | 2020-07-27T04:25:17Z | 2020-10-28T01:46:24Z | 2020-10-28T01:46:24Z | null | It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from GitHub in Colab. Is there any dependency on the latest PyArrow 1.0.0? Is it yet to be made generally available? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/439/timeline | null | completed | null | null | false | [
"`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).\r\n\r\nMoreover all the indexing features will also be available in the next release of `nlp`.",
"@lhoestq Thanks for the info ",
"@lhoestq I tried installing transformer from the master branch. Python imports for DPR again didnt' work. Anyways, Looking forward to trying it in the next release of nlp ",
"@nsankar have you tried with the latest version of the library?",
"@yjernite it worked. Thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/5167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5167/comments | https://api.github.com/repos/huggingface/datasets/issues/5167/events | https://github.com/huggingface/datasets/pull/5167 | 1,424,124,477 | PR_kwDODunzps5BljPw | 5,167 | Add ffmpeg4 installation instructions in warnings | [] | closed | false | null | 3 | 2022-10-26T14:21:14Z | 2022-10-27T09:01:12Z | 2022-10-27T08:58:58Z | null | Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users).
Looks pretty ugly because I didn't find a way to check the `ffmpeg` version from Python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work), so the warning is raised on each decoding. Any suggestions on how to make it look nice are welcome!
This is how it looks on Colab:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5167/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5167.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5167",
"merged_at": "2022-10-27T08:58:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5167.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5167"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To make it warn only once, feel free to use a global counter in python - and if the warning has already been done, you don't do it again",
"> Added the same formatting for the error message :)\r\n\r\nnice!! thank you! \r\n\r\n> Oh and regarding the warning counter, you can do it in another PR maybe ?\r\n\r\nYes, more warnings is better then no warnings.... I'll merge when the CI passes"
] |
https://api.github.com/repos/huggingface/datasets/issues/1416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1416/comments | https://api.github.com/repos/huggingface/datasets/issues/1416/events | https://github.com/huggingface/datasets/pull/1416 | 760,653,971 | MDExOlB1bGxSZXF1ZXN0NTM1NDUwMTIz | 1,416 | Add Shrinked Turkish NER from Kaggle. | [] | closed | false | null | 0 | 2020-12-09T20:38:35Z | 2020-12-11T11:23:31Z | 2020-12-11T11:23:31Z | null | Add Shrinked Turkish NER from [Kaggle](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1416/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1416",
"merged_at": "2020-12-11T11:23:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1416"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6000/comments | https://api.github.com/repos/huggingface/datasets/issues/6000/events | https://github.com/huggingface/datasets/pull/6000 | 1,782,456,878 | PR_kwDODunzps5UU_FB | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | [] | closed | false | null | 4 | 2023-06-30T12:36:54Z | 2023-06-30T13:17:05Z | 2023-06-30T13:08:27Z | null | `joblibspark` doesn't support the latest `joblib` release.
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6000/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6000.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6000",
"merged_at": "2023-06-30T13:08:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6000.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6000"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006722 / 0.011353 (-0.004631) | 0.004425 / 0.011008 (-0.006583) | 0.100850 / 0.038508 (0.062341) | 0.040816 / 0.023109 (0.017707) | 0.348823 / 0.275898 (0.072925) | 0.446285 / 0.323480 (0.122805) | 0.005738 / 0.007986 (-0.002247) | 0.003517 / 0.004328 (-0.000811) | 0.078824 / 0.004250 (0.074574) | 0.064695 / 0.037052 (0.027643) | 0.389894 / 0.258489 (0.131405) | 0.416107 / 0.293841 (0.122266) | 0.028850 / 0.128546 (-0.099696) | 0.009011 / 0.075646 (-0.066635) | 0.323117 / 0.419271 (-0.096154) | 0.049162 / 0.043533 (0.005629) | 0.340144 / 0.255139 (0.085005) | 0.382072 / 0.283200 (0.098872) | 0.023160 / 0.141683 (-0.118523) | 1.549218 / 1.452155 (0.097063) | 1.581266 / 1.492716 (0.088550) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.293360 / 0.018006 (0.275353) | 0.602189 / 0.000490 (0.601700) | 0.004608 / 0.000200 (0.004408) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028144 / 0.037411 (-0.009267) | 0.107088 / 0.014526 (0.092562) | 0.112188 / 0.176557 (-0.064369) | 0.174669 / 0.737135 (-0.562466) | 0.116359 / 0.296338 (-0.179980) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422911 / 0.215209 (0.207702) | 4.231524 / 2.077655 (2.153869) | 1.906711 / 1.504120 (0.402591) | 1.706841 / 1.541195 (0.165646) | 1.792066 / 1.468490 
(0.323576) | 0.559221 / 4.584777 (-4.025556) | 3.434280 / 3.745712 (-0.311433) | 1.918714 / 5.269862 (-3.351148) | 1.073070 / 4.565676 (-3.492606) | 0.067891 / 0.424275 (-0.356384) | 0.011927 / 0.007607 (0.004320) | 0.530843 / 0.226044 (0.304799) | 5.309213 / 2.268929 (3.040285) | 2.439246 / 55.444624 (-53.005378) | 2.101245 / 6.876477 (-4.775231) | 2.177436 / 2.142072 (0.035363) | 0.672150 / 4.805227 (-4.133077) | 0.137571 / 6.500664 (-6.363093) | 0.068343 / 0.075469 (-0.007126) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265262 / 1.841788 (-0.576525) | 14.988021 / 8.074308 (6.913713) | 13.611677 / 10.191392 (3.420285) | 0.171389 / 0.680424 (-0.509035) | 0.017681 / 0.534201 (-0.516520) | 0.377542 / 0.579283 (-0.201741) | 0.399475 / 0.434364 (-0.034889) | 0.469553 / 0.540337 (-0.070785) | 0.561888 / 1.386936 (-0.825048) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006782 / 0.011353 (-0.004571) | 0.004412 / 0.011008 (-0.006597) | 0.078594 / 0.038508 (0.040086) | 0.039930 / 0.023109 (0.016820) | 0.371879 / 0.275898 (0.095981) | 0.444910 / 0.323480 (0.121430) | 0.005707 / 0.007986 (-0.002279) | 0.003901 / 0.004328 (-0.000427) | 0.080125 / 0.004250 (0.075875) | 0.063977 / 0.037052 (0.026925) | 0.382781 / 0.258489 (0.124292) | 0.441791 / 0.293841 (0.147950) | 0.030428 / 0.128546 (-0.098118) | 0.009008 / 0.075646 (-0.066638) | 0.084447 / 0.419271 (-0.334824) | 0.044432 / 0.043533 (0.000899) | 0.365686 / 0.255139 (0.110547) | 0.394312 / 0.283200 (0.111113) | 0.024508 / 0.141683 (-0.117175) | 1.577020 / 1.452155 (0.124865) | 1.630259 / 1.492716 (0.137543) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.307960 / 0.018006 (0.289953) | 0.591473 / 0.000490 (0.590983) | 0.008098 / 0.000200 (0.007898) | 0.000110 / 0.000054 (0.000056) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029567 / 0.037411 (-0.007845) | 0.112773 / 0.014526 (0.098247) | 0.117362 / 0.176557 (-0.059194) | 0.174293 / 0.737135 (-0.562843) | 0.123156 / 0.296338 (-0.173182) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.457475 / 0.215209 (0.242266) | 4.599067 / 2.077655 (2.521412) | 2.262638 / 1.504120 (0.758518) | 2.124943 / 1.541195 (0.583748) | 2.339912 / 1.468490 (0.871422) | 0.566264 / 4.584777 (-4.018513) | 3.489261 / 3.745712 (-0.256451) | 1.925151 / 5.269862 (-3.344711) | 1.099389 / 4.565676 (-3.466287) | 0.068232 / 0.424275 (-0.356043) | 0.011660 / 0.007607 (0.004052) | 0.571227 / 0.226044 (0.345183) | 5.702059 / 2.268929 (3.433130) | 2.837701 / 55.444624 (-52.606924) | 2.605468 / 6.876477 (-4.271008) | 2.818396 / 2.142072 (0.676323) | 0.681856 / 4.805227 (-4.123371) | 0.141401 / 6.500664 (-6.359263) | 0.069728 / 0.075469 (-0.005741) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.354935 / 1.841788 (-0.486853) | 15.437404 / 8.074308 (7.363095) | 15.415193 / 10.191392 (5.223801) | 0.153459 / 0.680424 (-0.526964) | 0.017190 / 0.534201 (-0.517011) | 0.367256 / 0.579283 (-0.212027) | 0.392709 / 0.434364 (-0.041655) | 0.426125 / 0.540337 (-0.114213) | 0.522612 / 1.386936 (-0.864324) |\n\n</details>\n</details>\n\n\n",
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009183 / 0.011353 (-0.002170) | 0.005232 / 0.011008 (-0.005776) | 0.120349 / 0.038508 (0.081841) | 0.044715 / 0.023109 (0.021606) | 0.361519 / 0.275898 (0.085621) | 0.463702 / 0.323480 (0.140223) | 0.005842 / 0.007986 (-0.002144) | 0.004041 / 0.004328 (-0.000288) | 0.096953 / 0.004250 (0.092703) | 0.070593 / 0.037052 (0.033540) | 0.409790 / 0.258489 (0.151301) | 0.477452 / 0.293841 (0.183611) | 0.045827 / 0.128546 (-0.082719) | 0.014038 / 0.075646 (-0.061608) | 0.421317 / 0.419271 (0.002045) | 0.065276 / 0.043533 (0.021743) | 0.360074 / 0.255139 (0.104935) | 0.409147 / 0.283200 (0.125947) | 0.032444 / 0.141683 (-0.109238) | 1.739257 / 1.452155 (0.287102) | 1.831408 / 1.492716 (0.338692) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.274852 / 0.018006 (0.256846) | 0.596320 / 0.000490 (0.595830) | 0.006399 / 0.000200 (0.006199) | 0.000133 / 0.000054 (0.000079) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031400 / 0.037411 (-0.006012) | 0.127052 / 0.014526 (0.112526) | 0.134269 / 0.176557 (-0.042288) | 0.225998 / 0.737135 (-0.511137) | 0.150019 / 0.296338 (-0.146319) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.654202 / 0.215209 (0.438993) | 6.216735 / 2.077655 (4.139081) | 2.440214 / 1.504120 (0.936094) | 2.150575 / 1.541195 (0.609380) | 2.124790 / 1.468490 
(0.656300) | 0.923514 / 4.584777 (-3.661263) | 5.556924 / 3.745712 (1.811212) | 2.843886 / 5.269862 (-2.425975) | 1.834232 / 4.565676 (-2.731444) | 0.111735 / 0.424275 (-0.312540) | 0.014823 / 0.007607 (0.007216) | 0.820503 / 0.226044 (0.594459) | 7.887737 / 2.268929 (5.618809) | 3.120307 / 55.444624 (-52.324317) | 2.405856 / 6.876477 (-4.470621) | 2.411239 / 2.142072 (0.269167) | 1.071283 / 4.805227 (-3.733944) | 0.227738 / 6.500664 (-6.272926) | 0.073516 / 0.075469 (-0.001953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.531806 / 1.841788 (-0.309982) | 18.547661 / 8.074308 (10.473353) | 21.083922 / 10.191392 (10.892530) | 0.241706 / 0.680424 (-0.438718) | 0.034169 / 0.534201 (-0.500032) | 0.497514 / 0.579283 (-0.081769) | 0.599801 / 0.434364 (0.165437) | 0.576465 / 0.540337 (0.036127) | 0.673509 / 1.386936 (-0.713427) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007558 / 0.011353 (-0.003795) | 0.005001 / 0.011008 (-0.006008) | 0.093809 / 0.038508 (0.055301) | 0.039792 / 0.023109 (0.016683) | 0.456869 / 0.275898 (0.180971) | 0.493370 / 0.323480 (0.169891) | 0.005561 / 0.007986 (-0.002424) | 0.003982 / 0.004328 (-0.000346) | 0.085421 / 0.004250 (0.081170) | 0.059817 / 0.037052 (0.022765) | 0.468040 / 0.258489 (0.209550) | 0.514853 / 0.293841 (0.221012) | 0.044267 / 0.128546 (-0.084279) | 0.012674 / 0.075646 (-0.062972) | 0.098324 / 0.419271 (-0.320948) | 0.056604 / 0.043533 (0.013071) | 0.432200 / 0.255139 (0.177061) | 0.459812 / 0.283200 (0.176612) | 0.033872 / 0.141683 (-0.107811) | 1.618576 / 1.452155 (0.166421) | 1.676562 / 1.492716 (0.183846) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230625 / 0.018006 (0.212619) | 0.600558 / 0.000490 (0.600068) | 0.003419 / 0.000200 (0.003219) | 0.000113 / 0.000054 (0.000059) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026916 / 0.037411 (-0.010496) | 0.103003 / 0.014526 (0.088478) | 0.117078 / 0.176557 (-0.059478) | 0.169359 / 0.737135 (-0.567776) | 0.120305 / 0.296338 (-0.176034) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.616877 / 0.215209 (0.401668) | 6.157232 / 2.077655 (4.079577) | 2.869219 / 1.504120 (1.365099) | 2.381410 / 1.541195 (0.840216) | 2.417357 / 1.468490 (0.948867) | 0.914947 / 4.584777 (-3.669830) | 5.718526 / 3.745712 (1.972814) | 2.757253 / 5.269862 (-2.512609) | 1.794122 / 4.565676 (-2.771554) | 0.108423 / 0.424275 (-0.315852) | 0.013378 / 0.007607 (0.005771) | 0.831067 / 0.226044 (0.605023) | 8.478946 / 2.268929 (6.210018) | 3.685937 / 55.444624 (-51.758687) | 2.867472 / 6.876477 (-4.009005) | 2.895975 / 2.142072 (0.753903) | 1.137547 / 4.805227 (-3.667681) | 0.213891 / 6.500664 (-6.286773) | 0.075825 / 0.075469 (0.000356) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.621193 / 1.841788 (-0.220594) | 17.322110 / 8.074308 (9.247802) | 21.804016 / 10.191392 (11.612624) | 0.243692 / 0.680424 (-0.436732) | 0.030331 / 0.534201 (-0.503870) | 0.492186 / 0.579283 (-0.087097) | 0.632583 / 0.434364 (0.198219) | 0.576265 / 0.540337 (0.035927) | 0.713165 / 1.386936 (-0.673771) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008916 / 0.011353 (-0.002437) | 0.004737 / 0.011008 (-0.006271) | 0.134271 / 0.038508 (0.095763) | 0.054472 / 0.023109 (0.031363) | 0.380942 / 0.275898 (0.105044) | 0.474138 / 0.323480 (0.150658) | 0.007917 / 0.007986 (-0.000068) | 0.003748 / 0.004328 (-0.000580) | 0.092765 / 0.004250 (0.088515) | 0.077873 / 0.037052 (0.040821) | 0.397533 / 0.258489 (0.139043) | 0.454737 / 0.293841 (0.160896) | 0.039901 / 0.128546 (-0.088645) | 0.010188 / 0.075646 (-0.065458) | 0.447312 / 0.419271 (0.028040) | 0.068684 / 0.043533 (0.025151) | 0.371554 / 0.255139 (0.116415) | 0.459655 / 0.283200 (0.176455) | 0.027157 / 0.141683 (-0.114526) | 1.874643 / 1.452155 (0.422488) | 2.014800 / 1.492716 (0.522083) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.227079 / 0.018006 (0.209073) | 0.483241 / 0.000490 (0.482751) | 0.012404 / 0.000200 (0.012204) | 0.000409 / 0.000054 (0.000354) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033135 / 0.037411 (-0.004277) | 0.137782 / 0.014526 (0.123257) | 0.142951 / 0.176557 (-0.033605) | 0.209825 / 0.737135 (-0.527311) | 0.152438 / 0.296338 (-0.143900) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.513066 / 0.215209 (0.297857) | 5.122776 / 2.077655 (3.045121) | 2.399270 / 1.504120 (0.895150) | 2.180143 / 1.541195 (0.638949) | 2.286395 / 1.468490 
(0.817905) | 0.641866 / 4.584777 (-3.942911) | 4.694922 / 3.745712 (0.949210) | 2.543390 / 5.269862 (-2.726472) | 1.398592 / 4.565676 (-3.167084) | 0.088662 / 0.424275 (-0.335613) | 0.015854 / 0.007607 (0.008247) | 0.688891 / 0.226044 (0.462847) | 6.370148 / 2.268929 (4.101220) | 2.949974 / 55.444624 (-52.494650) | 2.538049 / 6.876477 (-4.338428) | 2.699380 / 2.142072 (0.557308) | 0.792670 / 4.805227 (-4.012557) | 0.169126 / 6.500664 (-6.331538) | 0.078511 / 0.075469 (0.003042) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609119 / 1.841788 (-0.232669) | 18.785069 / 8.074308 (10.710761) | 16.670783 / 10.191392 (6.479391) | 0.213081 / 0.680424 (-0.467343) | 0.023904 / 0.534201 (-0.510296) | 0.567720 / 0.579283 (-0.011564) | 0.505806 / 0.434364 (0.071442) | 0.649466 / 0.540337 (0.109129) | 0.773174 / 1.386936 (-0.613762) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008036 / 0.011353 (-0.003317) | 0.004808 / 0.011008 (-0.006201) | 0.094316 / 0.038508 (0.055808) | 0.056174 / 0.023109 (0.033065) | 0.481618 / 0.275898 (0.205720) | 0.565300 / 0.323480 (0.241820) | 0.006339 / 0.007986 (-0.001646) | 0.003950 / 0.004328 (-0.000379) | 0.093389 / 0.004250 (0.089139) | 0.076163 / 0.037052 (0.039111) | 0.489013 / 0.258489 (0.230524) | 0.565451 / 0.293841 (0.271611) | 0.039392 / 0.128546 (-0.089155) | 0.010553 / 0.075646 (-0.065093) | 0.101406 / 0.419271 (-0.317865) | 0.062355 / 0.043533 (0.018822) | 0.470461 / 0.255139 (0.215322) | 0.502574 / 0.283200 (0.219375) | 0.030196 / 0.141683 (-0.111486) | 1.893926 / 1.452155 (0.441771) | 1.958902 / 1.492716 (0.466185) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.198074 / 0.018006 (0.180068) | 0.476828 / 0.000490 (0.476338) | 0.003457 / 0.000200 (0.003257) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037576 / 0.037411 (0.000165) | 0.146663 / 0.014526 (0.132138) | 0.152969 / 0.176557 (-0.023588) | 0.218683 / 0.737135 (-0.518452) | 0.161552 / 0.296338 (-0.134786) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.525988 / 0.215209 (0.310779) | 5.234673 / 2.077655 (3.157018) | 2.571668 / 1.504120 (1.067548) | 2.339760 / 1.541195 (0.798565) | 2.422886 / 1.468490 (0.954395) | 0.651537 / 4.584777 (-3.933240) | 4.811148 / 3.745712 (1.065436) | 4.451165 / 5.269862 (-0.818697) | 2.016283 / 4.565676 (-2.549394) | 0.096393 / 0.424275 (-0.327882) | 0.015222 / 0.007607 (0.007615) | 0.739132 / 0.226044 (0.513087) | 6.813327 / 2.268929 (4.544399) | 3.169018 / 55.444624 (-52.275606) | 2.783120 / 6.876477 (-4.093356) | 2.918979 / 2.142072 (0.776907) | 0.797476 / 4.805227 (-4.007751) | 0.171038 / 6.500664 (-6.329626) | 0.079878 / 0.075469 (0.004409) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.595082 / 1.841788 (-0.246705) | 19.685844 / 8.074308 (11.611536) | 17.518989 / 10.191392 (7.327597) | 0.220015 / 0.680424 (-0.460409) | 0.026351 / 0.534201 (-0.507850) | 0.578977 / 0.579283 (-0.000306) | 0.549564 / 0.434364 (0.115200) | 0.667564 / 0.540337 (0.127227) | 0.802121 / 1.386936 (-0.584815) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1471/comments | https://api.github.com/repos/huggingface/datasets/issues/1471/events | https://github.com/huggingface/datasets/pull/1471 | 761,842,512 | MDExOlB1bGxSZXF1ZXN0NTM2NDUyMzcy | 1,471 | Adding the HAREM dataset | [] | closed | false | null | 5 | 2020-12-11T03:21:10Z | 2020-12-22T10:37:33Z | 2020-12-22T10:37:33Z | null | Adding the HAREM dataset, a Portuguese language dataset for NER tasks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1471/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1471",
"merged_at": "2020-12-22T10:37:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1471"
} | true | [
"Thanks for the changes !\r\n\r\nSorry if I wasn't clear about the suggestion of adding the `raw` dataset as well.\r\nBy `raw` I meant the dataset with its original features, i.e. not tokenized to follow the conll format for NER.\r\nThe `raw` dataset has data fields `doc_text`, `doc_id` and `entities`.",
"Alright, @lhoestq, now I understand your suggestion, but the JSON files the script downloads aren't actually the \"raw\" HAREM dataset, the [real raw version](https://www.linguateca.pt/primeiroHAREM/harem/ColeccaoDouradaHAREM.zip) of it is on XML format that needs a lot of preprocessing. Those JSON are just pre-processed versions of the dataset.\r\n\r\nI can make the config of the raw version of the pre-processed dataset, but I'll leave a comment on the dataset summary about those details.",
"Oh I see ! Sorry I thought there really were the raw (i.e. original/unprocessed) files",
"The original xml format doesn't seem very practical to use :/ \r\nWe can ignore that and only keep the default and selective configs then.\r\nCan you revert to the old configs ?\r\n\r\nThanks again for adding this dataset !",
"Hi @lhoestq, I've reverted the changes and put more details on README about the dataset."
] |
https://api.github.com/repos/huggingface/datasets/issues/3696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3696/comments | https://api.github.com/repos/huggingface/datasets/issues/3696/events | https://github.com/huggingface/datasets/pull/3696 | 1,129,764,534 | PR_kwDODunzps4yXXgH | 3,696 | Force unique keys in newsqa dataset | [] | closed | false | null | 0 | 2022-02-10T10:09:19Z | 2022-02-14T08:37:20Z | 2022-02-14T08:37:19Z | null | Currently, it may raise `DuplicatedKeysError`.
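For readers unfamiliar with the error: `DuplicatedKeysError` is raised when a loading script yields the same key twice from its example generator. Below is a minimal sketch of the usual remedy, building the key from a running counter or a composite id, with placeholder field names rather than the literal change made to the newsqa script.

```python
import json


def generate_examples(filepath):
    """Yield (key, example) pairs with keys guaranteed to be unique.

    The field names below are placeholders, not the real newsqa schema.
    """
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for story_idx, story in enumerate(data["data"]):
        for q_idx, question in enumerate(story.get("questions", [])):
            # Composite key: unique even if the same story id appears more than once.
            yield f"{story_idx}_{q_idx}", {
                "story_id": story.get("storyId"),
                "question": question.get("q"),
            }
```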
Fix #3630. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3696/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3696",
"merged_at": "2022-02-14T08:37:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3696"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1304/comments | https://api.github.com/repos/huggingface/datasets/issues/1304/events | https://github.com/huggingface/datasets/pull/1304 | 759,440,841 | MDExOlB1bGxSZXF1ZXN0NTM0NDQ2Nzcy | 1,304 | adding eitb_parcc | [] | closed | false | null | 0 | 2020-12-08T13:20:54Z | 2020-12-09T18:02:54Z | 2020-12-09T18:02:03Z | null | Adding EiTB-ParCC: Parallel Corpus of Comparable News
http://opus.nlpl.eu/EiTB-ParCC.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1304/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1304",
"merged_at": "2020-12-09T18:02:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1304"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2911/comments | https://api.github.com/repos/huggingface/datasets/issues/2911/events | https://github.com/huggingface/datasets/pull/2911 | 996,202,598 | PR_kwDODunzps4rvW7Y | 2,911 | Fix exception chaining | [] | closed | false | null | 0 | 2021-09-14T16:19:29Z | 2021-09-16T15:04:44Z | 2021-09-16T15:04:44Z | null | Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2911/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2911/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2911.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2911",
"merged_at": "2021-09-16T15:04:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2911.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2911"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/224/comments | https://api.github.com/repos/huggingface/datasets/issues/224/events | https://github.com/huggingface/datasets/issues/224 | 627,791,693 | MDU6SXNzdWU2Mjc3OTE2OTM= | 224 | [Feature Request/Help] BLEURT model -> PyTorch | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 5 | 2020-05-30T18:30:40Z | 2023-01-19T15:46:58Z | 2021-01-04T09:53:32Z | null | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter).
I had a go at manually using the checkpoint that they publish, which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers, just with a single linear layer on top of the CLS embedding. I loaded all the weights into the PyTorch model but I am not able to get the same numbers as the BLEURT package's Python API. Here is my colab notebook where I tried this: https://colab.research.google.com/drive/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong, that would be much appreciated!
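For reference, here is a bare-bones PyTorch sketch of the architecture described above: a BERT encoder with a single linear layer on top of the pooled [CLS] representation, feeding `pooler_output` to the head (one of the fixes later reported in the comments below). It is only a sketch; the base checkpoint name is a placeholder, and the copying of BLEURT's TensorFlow weights (where the 'kernel' matrices need transposing) is omitted.

```python
import torch
from transformers import BertModel, BertTokenizer


class BleurtLikeModel(torch.nn.Module):
    """BERT encoder + 1-dimensional linear head on the pooled [CLS] state (sketch)."""

    def __init__(self, model_name: str = "bert-base-uncased"):  # placeholder checkpoint
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.head = torch.nn.Linear(self.bert.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask=None, token_type_ids=None):
        outputs = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
        )
        # Use pooler_output (tanh-pooled [CLS]) rather than the raw last hidden state.
        return self.head(outputs.pooler_output).squeeze(-1)


# Usage sketch: BLEURT scores a (reference, candidate) pair encoded as one sequence pair.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BleurtLikeModel()
inputs = tokenizer("a reference sentence", "a candidate sentence", return_tensors="pt")
score = model(**inputs)  # meaningful only after copying weights from a BLEURT checkpoint
```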
Thank you muchly! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/224/timeline | null | completed | null | null | false | [
"Is there any update on this? \r\n\r\nThanks!",
"Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?",
"We currently provide a wrapper on the TensorFlow implementation: https://huggingface.co/metrics/bleurt\r\n\r\nWe have long term plans to better handle model-based metrics, but they probably won't be implemented right away\r\n\r\n@adamwlev it would still be cool to add the BLEURT checkpoints to the transformers repo if you're interested, but that would best be discussed there :) \r\n\r\nclosing for now",
"Hi there. We ran into the same problem this year (converting BLEURT to PyTorch) and thanks to @adamwlev found his colab notebook which didn't work but served as a good starting point. Finally, we **made it work** by doing just two simple conceptual fixes: \r\n\r\n1. Transposing 'kernel' layers instead of 'dense' ones when copying params from the original model;\r\n2. Taking pooler_output as a cls_state in forward function of the BleurtModel class.\r\n\r\nPlus few minor syntactical fixes for the outdated parts. The result is still not exactly the same, but is very close to the expected one (1.0483 vs 1.0474).\r\n\r\nFind the fixed version here (fixes are commented): https://colab.research.google.com/drive/1KsCUkFW45d5_ROSv2aHtXgeBa2Z98r03?usp=sharing \r\n",
"I created a new model based on `transformers` that can load every BLEURT checkpoints released so far. https://github.com/lucadiliello/bleurt-pytorch"
] |
https://api.github.com/repos/huggingface/datasets/issues/1666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1666/comments | https://api.github.com/repos/huggingface/datasets/issues/1666/events | https://github.com/huggingface/datasets/pull/1666 | 776,432,006 | MDExOlB1bGxSZXF1ZXN0NTQ2OTI2MzQw | 1,666 | Add language to dataset card for Makhzan dataset. | [] | closed | false | null | 0 | 2020-12-30T12:25:52Z | 2020-12-30T17:20:35Z | 2020-12-30T17:20:35Z | null | Add language to dataset card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1666/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1666.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1666",
"merged_at": "2020-12-30T17:20:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1666.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1666"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1512/comments | https://api.github.com/repos/huggingface/datasets/issues/1512/events | https://github.com/huggingface/datasets/pull/1512 | 764,010,722 | MDExOlB1bGxSZXF1ZXN0NTM4Mjc5MzIy | 1,512 | Add Hippocorpus Dataset | [] | closed | false | null | 0 | 2020-12-12T16:17:53Z | 2020-12-13T05:09:08Z | 2020-12-13T05:08:58Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1512/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1512",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1512"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/5758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5758/comments | https://api.github.com/repos/huggingface/datasets/issues/5758/events | https://github.com/huggingface/datasets/pull/5758 | 1,669,920,923 | PR_kwDODunzps5OaY9S | 5,758 | Fixes #5757 | [] | closed | false | null | 5 | 2023-04-16T11:56:01Z | 2023-04-20T15:37:49Z | 2023-04-20T15:30:48Z | null | Fixes the bug #5757 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5758/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5758.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5758",
"merged_at": "2023-04-20T15:30:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5758.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5758"
} | true | [
"The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Can you do that\n> before we merge ?\n>\n> —\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/pull/5758#issuecomment-1516488124>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AASS73QPLA735AMN4PFDYRTXCFFTJANCNFSM6AAAAAAXACBUQU>\n> .\n> You are receiving this because you authored the thread.Message ID:\n> ***@***.***>\n>\n",
"Nice thanks !",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007161 / 0.011353 (-0.004192) | 0.005099 / 0.011008 (-0.005909) | 0.099301 / 0.038508 (0.060793) | 0.034144 / 0.023109 (0.011034) | 0.298273 / 0.275898 (0.022375) | 0.329009 / 0.323480 (0.005529) | 0.005486 / 0.007986 (-0.002500) | 0.003887 / 0.004328 (-0.000441) | 0.074769 / 0.004250 (0.070518) | 0.047505 / 0.037052 (0.010453) | 0.306550 / 0.258489 (0.048061) | 0.335380 / 0.293841 (0.041540) | 0.034796 / 0.128546 (-0.093750) | 0.012152 / 0.075646 (-0.063495) | 0.332194 / 0.419271 (-0.087077) | 0.049661 / 0.043533 (0.006128) | 0.296832 / 0.255139 (0.041693) | 0.316417 / 0.283200 (0.033218) | 0.098234 / 0.141683 (-0.043449) | 1.494114 / 1.452155 (0.041959) | 1.566468 / 1.492716 (0.073751) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221309 / 0.018006 (0.203303) | 0.440855 / 0.000490 (0.440365) | 0.003025 / 0.000200 (0.002825) | 0.000076 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026594 / 0.037411 (-0.010817) | 0.110406 / 0.014526 (0.095880) | 0.116117 / 0.176557 (-0.060439) | 0.173502 / 0.737135 (-0.563633) | 0.121988 / 0.296338 (-0.174351) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403307 / 0.215209 (0.188098) | 4.034146 / 2.077655 (1.956492) | 1.852162 / 1.504120 (0.348042) | 1.675643 / 1.541195 (0.134448) | 1.748851 / 1.468490 
(0.280360) | 0.703458 / 4.584777 (-3.881319) | 3.809055 / 3.745712 (0.063343) | 2.118060 / 5.269862 (-3.151801) | 1.338394 / 4.565676 (-3.227282) | 0.086319 / 0.424275 (-0.337956) | 0.012195 / 0.007607 (0.004588) | 0.520814 / 0.226044 (0.294769) | 5.201074 / 2.268929 (2.932145) | 2.418384 / 55.444624 (-53.026240) | 2.085496 / 6.876477 (-4.790980) | 2.245638 / 2.142072 (0.103565) | 0.849042 / 4.805227 (-3.956185) | 0.171912 / 6.500664 (-6.328752) | 0.065691 / 0.075469 (-0.009778) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.159985 / 1.841788 (-0.681803) | 14.910867 / 8.074308 (6.836559) | 14.473926 / 10.191392 (4.282534) | 0.181532 / 0.680424 (-0.498891) | 0.017203 / 0.534201 (-0.516998) | 0.420805 / 0.579283 (-0.158479) | 0.426455 / 0.434364 (-0.007909) | 0.497086 / 0.540337 (-0.043251) | 0.593909 / 1.386936 (-0.793027) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007688 / 0.011353 (-0.003665) | 0.005353 / 0.011008 (-0.005656) | 0.076869 / 0.038508 (0.038361) | 0.035030 / 0.023109 (0.011921) | 0.344649 / 0.275898 (0.068751) | 0.387669 / 0.323480 (0.064190) | 0.005913 / 0.007986 (-0.002072) | 0.004107 / 0.004328 (-0.000221) | 0.074111 / 0.004250 (0.069860) | 0.049351 / 0.037052 (0.012299) | 0.346061 / 0.258489 (0.087572) | 0.395499 / 0.293841 (0.101658) | 0.035549 / 0.128546 (-0.092997) | 0.012340 / 0.075646 (-0.063307) | 0.087031 / 0.419271 (-0.332241) | 0.049088 / 0.043533 (0.005556) | 0.342774 / 0.255139 (0.087635) | 0.362037 / 0.283200 (0.078837) | 0.100329 / 0.141683 (-0.041354) | 1.442349 / 1.452155 (-0.009806) | 1.551079 / 1.492716 (0.058363) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228458 / 0.018006 (0.210452) | 0.446190 / 0.000490 (0.445701) | 0.000413 / 0.000200 (0.000213) | 0.000056 / 0.000054 (0.000001) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029884 / 0.037411 (-0.007527) | 0.117527 / 0.014526 (0.103002) | 0.123221 / 0.176557 (-0.053335) | 0.172290 / 0.737135 (-0.564845) | 0.128682 / 0.296338 (-0.167657) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420905 / 0.215209 (0.205696) | 4.199342 / 2.077655 (2.121687) | 2.007327 / 1.504120 (0.503207) | 1.814732 / 1.541195 (0.273537) | 1.893999 / 1.468490 (0.425509) | 0.712259 / 4.584777 (-3.872518) | 3.843402 / 3.745712 (0.097690) | 3.198514 / 5.269862 (-2.071348) | 1.678732 / 4.565676 (-2.886945) | 0.086435 / 0.424275 (-0.337840) | 0.012233 / 0.007607 (0.004626) | 0.526121 / 0.226044 (0.300077) | 5.190578 / 2.268929 (2.921650) | 2.473259 / 55.444624 (-52.971366) | 2.142795 / 6.876477 (-4.733682) | 2.277594 / 2.142072 (0.135521) | 0.846117 / 4.805227 (-3.959110) | 0.169458 / 6.500664 (-6.331206) | 0.065017 / 0.075469 (-0.010452) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.272479 / 1.841788 (-0.569309) | 15.086473 / 8.074308 (7.012165) | 14.659728 / 10.191392 (4.468336) | 0.163915 / 0.680424 (-0.516509) | 0.017561 / 0.534201 (-0.516640) | 0.422074 / 0.579283 (-0.157209) | 0.421963 / 0.434364 (-0.012401) | 0.490321 / 0.540337 (-0.050016) | 0.586854 / 1.386936 (-0.800083) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2151/comments | https://api.github.com/repos/huggingface/datasets/issues/2151/events | https://github.com/huggingface/datasets/pull/2151 | 844,886,081 | MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw | 2,151 | Add support for axis in concatenate datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 5 | 2021-03-30T16:58:44Z | 2021-06-23T17:41:02Z | 2021-04-19T16:07:18Z | null | Add support for `axis` (0 or 1) in `concatenate_datasets`.
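A short usage sketch of the new parameter: `axis=0` keeps the existing behaviour of appending rows, while `axis=1` concatenates column-wise, which requires the datasets to have the same number of rows and non-overlapping column names.

```python
from datasets import Dataset, concatenate_datasets

ds_text = Dataset.from_dict({"text": ["hello", "world"]})
ds_label = Dataset.from_dict({"label": [0, 1]})

# axis=0 (default): stack the rows of datasets sharing the same schema.
more_rows = concatenate_datasets([ds_text, Dataset.from_dict({"text": ["!"]})], axis=0)

# axis=1 (added here): add the columns of datasets with the same number of rows.
more_cols = concatenate_datasets([ds_text, ds_label], axis=1)
print(more_cols.column_names)  # ['text', 'label']
```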
Close #853. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2151/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"merged_at": "2021-04-19T16:07:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2151"
} | true | [
"@lhoestq I am going to implement the consolidation step you mentioned in #1870.",
"@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_memory_2]\r\n```\r\nI could reorder the list:\r\n```\r\nblocks = [in_memory_1, in_memory_2, memory_mapped]\r\n```\r\nso that the first 2 can be consolidated into a single one:\r\n```\r\nblocks = [in_memory_3, memory_mapped]\r\n```",
"I think the order is important, users won't expect the dataset to be \"shuffled\" when they add a new item",
"> I think the order is important, users won't expect the dataset to be \"shuffled\" when they add a new item\r\n\r\nOK, therefore I leave `_consolidate_blocks` as it is, which currently keeps the order of the blocks (no shuffling).",
"Thank you guys for implementing this. Minor thing I noticed in the [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.concatenate_datasets): it says \"Converts a list of Dataset with **the same schema** into a single Dataset\". With the addition of the axis parameter, perhaps this should be reworded, no?"
] |
https://api.github.com/repos/huggingface/datasets/issues/828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/828/comments | https://api.github.com/repos/huggingface/datasets/issues/828/events | https://github.com/huggingface/datasets/pull/828 | 740,008,683 | MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3 | 828 | Add writer_batch_size attribute to GeneratorBasedBuilder | [] | closed | false | null | 0 | 2020-11-10T15:28:19Z | 2020-11-10T16:27:36Z | 2020-11-10T16:27:36Z | null | As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/828/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/828",
"merged_at": "2020-11-10T16:27:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/828"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5598/comments | https://api.github.com/repos/huggingface/datasets/issues/5598/events | https://github.com/huggingface/datasets/pull/5598 | 1,605,018,478 | PR_kwDODunzps5LCMiX | 5,598 | Fix push_to_hub with no dataset_infos | [] | closed | false | null | 2 | 2023-03-01T13:54:06Z | 2023-03-02T13:47:13Z | 2023-03-02T13:40:17Z | null | As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags
cc @clefourrier | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5598/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5598",
"merged_at": "2023-03-02T13:40:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5598"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008823 / 0.011353 (-0.002529) | 0.004738 / 0.011008 (-0.006270) | 0.102338 / 0.038508 (0.063830) | 0.030603 / 0.023109 (0.007494) | 0.302995 / 0.275898 (0.027097) | 0.362080 / 0.323480 (0.038600) | 0.007096 / 0.007986 (-0.000889) | 0.003493 / 0.004328 (-0.000835) | 0.079129 / 0.004250 (0.074878) | 0.037966 / 0.037052 (0.000914) | 0.310412 / 0.258489 (0.051923) | 0.346740 / 0.293841 (0.052899) | 0.033795 / 0.128546 (-0.094751) | 0.011595 / 0.075646 (-0.064051) | 0.325189 / 0.419271 (-0.094083) | 0.041679 / 0.043533 (-0.001854) | 0.302339 / 0.255139 (0.047200) | 0.322519 / 0.283200 (0.039319) | 0.089058 / 0.141683 (-0.052625) | 1.496223 / 1.452155 (0.044068) | 1.512562 / 1.492716 (0.019845) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.009298 / 0.018006 (-0.008709) | 0.406726 / 0.000490 (0.406236) | 0.003753 / 0.000200 (0.003553) | 0.000082 / 0.000054 (0.000028) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023327 / 0.037411 (-0.014084) | 0.098175 / 0.014526 (0.083649) | 0.106040 / 0.176557 (-0.070516) | 0.151934 / 0.737135 (-0.585201) | 0.108465 / 0.296338 (-0.187873) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419073 / 0.215209 (0.203864) | 4.188012 / 2.077655 (2.110358) | 1.857667 / 1.504120 (0.353547) | 1.664124 / 1.541195 (0.122929) | 1.704341 / 1.468490 
(0.235851) | 0.699671 / 4.584777 (-3.885106) | 3.391110 / 3.745712 (-0.354602) | 1.871136 / 5.269862 (-3.398725) | 1.176794 / 4.565676 (-3.388882) | 0.083322 / 0.424275 (-0.340953) | 0.012450 / 0.007607 (0.004843) | 0.525058 / 0.226044 (0.299014) | 5.265425 / 2.268929 (2.996497) | 2.320672 / 55.444624 (-53.123952) | 1.964806 / 6.876477 (-4.911671) | 2.027055 / 2.142072 (-0.115017) | 0.819768 / 4.805227 (-3.985459) | 0.149638 / 6.500664 (-6.351026) | 0.064774 / 0.075469 (-0.010695) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.204575 / 1.841788 (-0.637212) | 13.651878 / 8.074308 (5.577570) | 13.751973 / 10.191392 (3.560581) | 0.154781 / 0.680424 (-0.525643) | 0.028887 / 0.534201 (-0.505314) | 0.404905 / 0.579283 (-0.174379) | 0.411320 / 0.434364 (-0.023043) | 0.485026 / 0.540337 (-0.055311) | 0.579690 / 1.386936 (-0.807246) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006615 / 0.011353 (-0.004737) | 0.004606 / 0.011008 (-0.006402) | 0.076099 / 0.038508 (0.037591) | 0.027247 / 0.023109 (0.004137) | 0.360731 / 0.275898 (0.084833) | 0.393688 / 0.323480 (0.070208) | 0.005079 / 0.007986 (-0.002906) | 0.003345 / 0.004328 (-0.000984) | 0.077184 / 0.004250 (0.072934) | 0.037850 / 0.037052 (0.000797) | 0.379738 / 0.258489 (0.121249) | 0.400474 / 0.293841 (0.106633) | 0.031581 / 0.128546 (-0.096966) | 0.011508 / 0.075646 (-0.064138) | 0.084966 / 0.419271 (-0.334306) | 0.041740 / 0.043533 (-0.001793) | 0.349887 / 0.255139 (0.094748) | 0.384405 / 0.283200 (0.101205) | 0.089022 / 0.141683 (-0.052661) | 1.503448 / 1.452155 (0.051293) | 1.564870 / 1.492716 (0.072154) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.233581 / 0.018006 (0.215574) | 0.413819 / 0.000490 (0.413330) | 0.000398 / 0.000200 (0.000198) | 0.000060 / 0.000054 (0.000006) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024805 / 0.037411 (-0.012607) | 0.101348 / 0.014526 (0.086822) | 0.108701 / 0.176557 (-0.067856) | 0.160011 / 0.737135 (-0.577124) | 0.111696 / 0.296338 (-0.184642) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436303 / 0.215209 (0.221094) | 4.368684 / 2.077655 (2.291029) | 2.082366 / 1.504120 (0.578247) | 1.888108 / 1.541195 (0.346913) | 1.958295 / 1.468490 (0.489804) | 0.700858 / 4.584777 (-3.883919) | 3.408321 / 3.745712 (-0.337391) | 1.872960 / 5.269862 (-3.396902) | 1.165116 / 4.565676 (-3.400560) | 0.083556 / 0.424275 (-0.340719) | 0.012348 / 0.007607 (0.004741) | 0.536551 / 0.226044 (0.310506) | 5.359974 / 2.268929 (3.091045) | 2.539043 / 55.444624 (-52.905581) | 2.200314 / 6.876477 (-4.676162) | 2.222051 / 2.142072 (0.079979) | 0.808567 / 4.805227 (-3.996661) | 0.151222 / 6.500664 (-6.349442) | 0.066351 / 0.075469 (-0.009118) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.265502 / 1.841788 (-0.576286) | 13.692066 / 8.074308 (5.617758) | 13.124507 / 10.191392 (2.933115) | 0.129545 / 0.680424 (-0.550879) | 0.016827 / 0.534201 (-0.517374) | 0.380326 / 0.579283 (-0.198957) | 0.387268 / 0.434364 (-0.047096) | 0.463722 / 0.540337 (-0.076616) | 0.553681 / 1.386936 (-0.833255) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1253/comments | https://api.github.com/repos/huggingface/datasets/issues/1253/events | https://github.com/huggingface/datasets/pull/1253 | 758,517,391 | MDExOlB1bGxSZXF1ZXN0NTMzNjc4MDE1 | 1,253 | add thainer | [] | closed | false | null | 0 | 2020-12-07T13:41:54Z | 2020-12-08T14:44:49Z | 2020-12-08T14:44:49Z | null | ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence
[unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).
It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/)
for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/).
The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`.
[@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1253/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1253",
"merged_at": "2020-12-08T14:44:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1253"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | [] | closed | false | null | 2 | 2021-02-20T14:18:10Z | 2021-03-03T17:40:27Z | 2021-03-03T17:40:27Z | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method does not create a directory named `dataset_path` to save the dataset in (i.e. it's not creating the *train* and *validation* directories in this case). After creating the directory, the problem is resolved.
I'll open a PR soon doing that and linking this issue.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | completed | null | null | false | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] |
https://api.github.com/repos/huggingface/datasets/issues/2545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2545/comments | https://api.github.com/repos/huggingface/datasets/issues/2545/events | https://github.com/huggingface/datasets/pull/2545 | 929,016,580 | MDExOlB1bGxSZXF1ZXN0Njc2OTMxOTYw | 2,545 | Fix DuplicatedKeysError in drop dataset | [] | closed | false | null | 0 | 2021-06-24T09:10:39Z | 2021-06-24T14:57:08Z | 2021-06-24T14:57:08Z | null | Close #2542.
cc: @VictorSanh. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2545",
"merged_at": "2021-06-24T14:57:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2545"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/88 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/88/comments | https://api.github.com/repos/huggingface/datasets/issues/88/events | https://github.com/huggingface/datasets/pull/88 | 617,284,664 | MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw | 88 | Add wiki40b | [] | closed | false | null | 1 | 2020-05-13T09:16:01Z | 2020-05-13T12:31:55Z | 2020-05-13T12:31:54Z | null | This one is a beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/88/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"merged_at": "2020-05-13T12:31:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88"
} | true | [
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/1290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1290/comments | https://api.github.com/repos/huggingface/datasets/issues/1290/events | https://github.com/huggingface/datasets/issues/1290 | 759,339,989 | MDU6SXNzdWU3NTkzMzk5ODk= | 1,290 | imdb dataset cannot be downloaded | [] | closed | false | null | 3 | 2020-12-08T10:47:36Z | 2020-12-24T17:38:09Z | 2020-12-24T17:38:09Z | null | hi
please find error below getting imdb train spli:
thanks
`
datasets.load_dataset>>> datasets.load_dataset("imdb", split="train")`
errors
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1290/timeline | null | completed | null | null | false | [
"Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?",
"Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"train\")\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\nDownloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets\r\ncahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py\", line 611, in load_dataset\r\n ignore_verifications=ignore_verifications,\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 476, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py\", line 558, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 73, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=4902716, num_examples=3680, dataset_name='imdb')}]\r\n```\r\n\r\ndatasets version\r\n```\r\ndatasets 1.1.2 <pip>\r\ntensorflow-datasets 4.1.0 <pip>\r\n\r\n```",
"resolved with moving to version 1.1.3"
] |
https://api.github.com/repos/huggingface/datasets/issues/2753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2753/comments | https://api.github.com/repos/huggingface/datasets/issues/2753/events | https://github.com/huggingface/datasets/pull/2753 | 959,036,995 | MDExOlB1bGxSZXF1ZXN0NzAyMjEyMjMz | 2,753 | Generate metadata JSON for reclor dataset | [] | closed | false | null | 0 | 2021-08-03T11:52:29Z | 2021-08-04T08:07:15Z | 2021-08-04T08:07:15Z | null | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2753/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2753",
"merged_at": "2021-08-04T08:07:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2753"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2781/comments | https://api.github.com/repos/huggingface/datasets/issues/2781/events | https://github.com/huggingface/datasets/issues/2781 | 964,805,351 | MDU6SXNzdWU5NjQ4MDUzNTE= | 2,781 | Latest v2.0.0 release of sacrebleu has broken some metrics | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-08-10T09:59:41Z | 2021-08-10T11:16:07Z | 2021-08-10T11:16:07Z | null | ## Describe the bug
After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken:
- Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists:
- #2739
- #2778
- Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`:
- #2779
- `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`:
- #2782 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2781/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3340/comments | https://api.github.com/repos/huggingface/datasets/issues/3340/events | https://github.com/huggingface/datasets/pull/3340 | 1,067,292,636 | PR_kwDODunzps4vMP6Z | 3,340 | Fix JSON ClassLabel casting for integers | [] | closed | false | null | 0 | 2021-11-30T14:19:54Z | 2021-12-01T11:27:30Z | 2021-12-01T11:27:30Z | null | Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already.
For example this currently fails:
```python
from datasets import load_dataset, Features, ClassLabel
path = "data.json"
f = Features({"a": ClassLabel(names=["neg", "pos"])})
d = load_dataset("json", data_files=path, features=f)
```
data.json
```json
{"a": 0}
{"a": 1}
```
I fixed that by adding a line that checks the type of the JSON data before trying to convert them
cc @albertvillanova let me know if it sounds good to you | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3340",
"merged_at": "2021-12-01T11:27:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3340"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4894/comments | https://api.github.com/repos/huggingface/datasets/issues/4894/events | https://github.com/huggingface/datasets/pull/4894 | 1,350,667,270 | PR_kwDODunzps49yIvr | 4,894 | Add citation information to makhzan dataset | [] | closed | false | null | 1 | 2022-08-25T10:16:40Z | 2022-08-30T06:21:54Z | 2022-08-25T13:19:41Z | null | This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4894/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4894",
"merged_at": "2022-08-25T13:19:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4894"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2364/comments | https://api.github.com/repos/huggingface/datasets/issues/2364/events | https://github.com/huggingface/datasets/pull/2364 | 892,420,500 | MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx | 2,364 | README updated for SNLI, MNLI | [] | closed | false | null | 2 | 2021-05-15T11:37:59Z | 2021-05-17T14:14:27Z | 2021-05-17T13:34:19Z | null | Closes #2275. Mentioned about -1 labels in MNLI, SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2364/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2364",
"merged_at": "2021-05-17T13:34:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2364"
} | true | [
"Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?",
"@lhoestq I agree, I'll look into it."
] |
https://api.github.com/repos/huggingface/datasets/issues/3126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3126/comments | https://api.github.com/repos/huggingface/datasets/issues/3126/events | https://github.com/huggingface/datasets/issues/3126 | 1,032,093,055 | I_kwDODunzps49hH1_ | 3,126 | "arabic_billion_words" dataset does not create the full dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-21T06:02:38Z | 2021-10-22T13:28:40Z | 2021-10-22T13:28:40Z | null | ## Describe the bug
When running:
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
the correct dataset file is pulled from the url.
But, the generated dataset includes just a small portion of the data included in the file.
This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum',.....)
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
#The screen message
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB)
```

## Expected results
over 100K sentences
## Actual results
only 11K sentences
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3126/timeline | null | completed | null | null | false | [
"Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/5923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5923/comments | https://api.github.com/repos/huggingface/datasets/issues/5923/events | https://github.com/huggingface/datasets/issues/5923 | 1,737,436,227 | I_kwDODunzps5njyxD | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | [] | open | false | null | 13 | 2023-06-02T04:16:32Z | 2023-07-23T20:39:59Z | null | null | ### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5923/timeline | null | null | null | null | false | [
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; print(pyarrow.__file__)\"\r\n```\r\n\r\n\r\n",
"> Based on [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187), this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n> \r\n> Can you please execute the following commands in the terminal and paste the output here?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\n\r\nHere is the output to the first command:\r\n```\r\narrow-cpp 11.0.0 py39h7f74497_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n```\r\nand the second:\r\n```\r\n/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/__init__.py\r\n```\r\nThanks!\r\n\r\n\r\n\r\n",
"after installing pytesseract 0.3.10, I got the above error. FYI ",
"RuntimeError: Failed to import transformers.trainer because of the following error (look up to see its traceback):\r\npyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject",
"I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n\r\nDo we need to update dependencies? ",
"Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291",
"For conda with python3.8.16 this solved my problem! thanks!\r\n\r\n> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies? I can work on that if no one else is working on it.\r\n\r\n",
"Thanks for replying. I am not sure about those environments but it seems like pyarrow-12.0.0 does not work for conda with python 3.8.16. \r\n\r\n> Please note that our CI properly passes all tests with `pyarrow-12.0.0`, for Python 3.7 and Python 3.10, for Ubuntu and Windows: see for example https://github.com/huggingface/datasets/actions/runs/5157324334/jobs/9289582291\r\n\r\n",
"Got the same error with:\r\n\r\n```\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.0 pypi_0 pypi\r\n\r\npython 3.10.11 h7a1cb2a_2 \r\n\r\ndatasets 2.13.0 pyhd8ed1ab_0 conda-forge\r\n```",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nThis solved the issue for me as well.",
"> I got the same error, pyarrow 12.0.0 released May/2023 (https://pypi.org/project/pyarrow/) is not compatible, running `pip install pyarrow==11.0.0` to force install the previous version solved the problem.\r\n> \r\n> Do we need to update dependencies?\r\n\r\nSolved it for me also",
"> 基于 [rapidsai/cudf#10187](https://github.com/rapidsai/cudf/issues/10187),这可能意味着您的安装与 不兼容。`pyarrow``datasets`\r\n> \r\n> 您能否在终端中执行以下命令并将输出粘贴到此处?\r\n> \r\n> ```\r\n> conda list | grep arrow\r\n> ```\r\n> \r\n> ```\r\n> python -c \"import pyarrow; print(pyarrow.__file__)\"\r\n> ```\r\n\r\narrow-cpp 11.0.0 py310h7516544_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\n/root/miniconda3/lib/python3.10/site-packages/pyarrow/__init__.py",
"Got the same problem with\r\n\r\narrow-cpp 11.0.0 py310h1fc3239_0 \r\npyarrow 12.0.1 pypi_0 pypi\r\n\r\nminiforge3/envs/mlp/lib/python3.10/site-packages/pyarrow/__init__.py\r\n\r\nReverting back to pyarrow 11 solved the problem.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5715/comments | https://api.github.com/repos/huggingface/datasets/issues/5715/events | https://github.com/huggingface/datasets/issues/5715 | 1,657,479,788 | I_kwDODunzps5iyyJs | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2023-04-06T13:57:48Z | 2023-04-20T17:16:26Z | 2023-04-20T17:16:26Z | null | ### Feature request
There is an old, well-known, but easily overlooked problem with multiprocessing in the PyTorch DataLoader:
excessive RAM or shared-memory usage in PyTorch when we set num_workers > 1 and the return type of the dataset or dataloader is a "List" or "Dict".
https://github.com/pytorch/pytorch/issues/13246
With huggingface datasets, unfortunately, the default return type is a list, so this problem shows up very often unless we configure something to work around it.
However, the issue can be avoided when the returned output has a fixed length.
Therefore, I request a mode that returns outputs with a fixed length (e.g. a numpy array) rather than a list.
A good design would be to be able to load datasets as
```python
load_dataset(..., with_return_as_fixed_tensor=True)
```
### Motivation
The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662
Numpy or Pandas do not seem to have this problem, even though both support the string type.
(I'm not sure whether the sequence feature of huggingface datasets can solve this problem as well.)
### Your contribution
I'll read it ! thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5715/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] |
https://api.github.com/repos/huggingface/datasets/issues/5767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5767/comments | https://api.github.com/repos/huggingface/datasets/issues/5767/events | https://github.com/huggingface/datasets/issues/5767 | 1,672,433,979 | I_kwDODunzps5jr1E7 | 5,767 | How to use Distill-BERT with different datasets? | [] | closed | false | null | 1 | 2023-04-18T06:25:12Z | 2023-04-20T16:52:05Z | 2023-04-20T16:52:05Z | null | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5767/timeline | null | completed | null | null | false | [
"Closing this one in favor of the same issue opened in the `transformers` repo."
] |
https://api.github.com/repos/huggingface/datasets/issues/4126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4126/comments | https://api.github.com/repos/huggingface/datasets/issues/4126/events | https://github.com/huggingface/datasets/issues/4126 | 1,196,665,194 | I_kwDODunzps5HU6lq | 4,126 | dataset viewer issue for common_voice | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
},
{
"color": "F83ACF",
"default": false,
"description": "",
"id": 4027368468,
"name": "audio_column",
"node_id": "LA_kwDODunzps7wDMQU",
"url": "https://api.github.com/repos/huggingface/datasets/labels/audio_column"
}
] | closed | false | null | 2 | 2022-04-07T23:34:28Z | 2022-04-25T13:42:17Z | 2022-04-25T13:42:16Z | null | ## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4126/timeline | null | completed | null | null | false | [
"Yes, it's a known issue, and we expect to fix it soon.",
"Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4022/comments | https://api.github.com/repos/huggingface/datasets/issues/4022/events | https://github.com/huggingface/datasets/pull/4022 | 1,180,816,682 | PR_kwDODunzps41BNeA | 4,022 | Replace dbpedia_14 data url | [] | closed | false | null | 1 | 2022-03-25T13:47:21Z | 2022-03-25T15:03:37Z | 2022-03-25T14:58:49Z | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4022/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"merged_at": "2022-03-25T14:58:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3925/comments | https://api.github.com/repos/huggingface/datasets/issues/3925/events | https://github.com/huggingface/datasets/pull/3925 | 1,169,913,769 | PR_kwDODunzps40eaq8 | 3,925 | Fix main_classes docs index | [] | closed | false | null | 3 | 2022-03-15T16:33:46Z | 2022-03-22T13:49:11Z | 2022-03-22T13:44:04Z | null | Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3925/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3925",
"merged_at": "2022-03-22T13:44:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3925"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm it's still not good \r\n\r\n\r\nany idea what could cause this ?",
"Ok fixed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4824/comments | https://api.github.com/repos/huggingface/datasets/issues/4824/events | https://github.com/huggingface/datasets/pull/4824 | 1,335,826,639 | PR_kwDODunzps49BR5H | 4,824 | Fix titles in dataset cards | [] | closed | false | null | 2 | 2022-08-11T11:27:48Z | 2022-08-11T13:46:11Z | 2022-08-11T12:56:49Z | null | Fix all the titles in the dataset cards, so that they conform to the required format. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4824/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"merged_at": "2022-08-11T12:56:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/2431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2431/comments | https://api.github.com/repos/huggingface/datasets/issues/2431/events | https://github.com/huggingface/datasets/issues/2431 | 907,413,691 | MDU6SXNzdWU5MDc0MTM2OTE= | 2,431 | DuplicatedKeysError when trying to load adversarial_qa | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-31T12:11:19Z | 2021-06-01T08:54:03Z | 2021-06-01T08:52:11Z | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
>
>
>During handling of the above exception, another exception occurred:
>
>DuplicatedKeysError Traceback (most recent call last)
>
>/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
> 347 for hash, key in self.hkey_record:
> 348 if hash in tmp_record:
>--> 349 raise DuplicatedKeysError(key)
> 350 else:
> 351 tmp_record.add(hash)
>
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2431/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\n#2433 fixed the issue, thanks @mariosasko :)\r\n\r\nWe'll do a patch release soon of the library.\r\nIn the meantime, you can use the fixed version of adversarial_qa by adding `script_version=\"master\"` in `load_dataset`"
] |
https://api.github.com/repos/huggingface/datasets/issues/1086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1086/comments | https://api.github.com/repos/huggingface/datasets/issues/1086/events | https://github.com/huggingface/datasets/pull/1086 | 756,720,643 | MDExOlB1bGxSZXF1ZXN0NTMyMjIzNDEy | 1,086 | adding cdt dataset | [] | closed | false | null | 2 | 2020-12-04T01:28:11Z | 2020-12-04T15:04:02Z | 2020-12-04T15:04:02Z | null | - **Name:** *Cyberbullying Detection Task*
- **Description:** *The Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.*
- **Data:** *https://github.com/ptaszynski/cyberbullying-Polish*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.* | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1086/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1086",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1086"
} | true | [
"> Thanks for adding this one !\r\n> \r\n> I left a few comments\r\n> \r\n> after the change you'll need to regenerate the dataset_infos.json file as well\r\n\r\ndataset_infos.json regenerated",
"looks like this PR includes changes to many files other that the ones for CDT\r\ncould you create another branch and another PR please ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2/comments | https://api.github.com/repos/huggingface/datasets/issues/2/events | https://github.com/huggingface/datasets/issues/2 | 599,767,671 | MDU6SXNzdWU1OTk3Njc2NzE= | 2 | Issue to read a local dataset | [] | closed | false | null | 5 | 2020-04-14T18:18:51Z | 2020-05-11T18:55:23Z | 2020-05-11T18:55:22Z | null | Hello,
As proposed by @thomwolf, I'm opening an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset; the script I have written is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwargs):
super(BbcConfig, self).__init__(**kwargs)
class Bbc(nlp.GeneratorBasedBuilder):
_DIR = "./data"
_DEV_FILE = "test.csv"
_TRAINING_FILE = "train.csv"
BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]
def _info(self):
return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))
def _split_generators(self, dl_manager):
files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]
def _generate_examples(self, filepath):
with open(filepath) as f:
reader = csv.reader(f, delimiter=',', quotechar="\"")
lines = list(reader)[1:]
for idx, line in enumerate(lines):
yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```
The dataset is attached to this issue as well:
[data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)
Now the steps to reproduce what I would like to do:
1. unzip data locally (I know the nlp lib can detect and extract archives but I want to reduce and facilitate the reproduction as much as possible)
2. create the `bbc.py` script as above at the same location as the unzipped `data` folder.
Now I try to load the dataset in three different ways and none of them works. The first one uses the name of the dataset, like I would do with TFDS:
```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```
I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
local_files_only=local_files_only,
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
But @thomwolf told me that there is no need to import the script, just pass its path, so I tried three different ways:
```python
import nlp
dataset = nlp.load("bbc.py")
```
And
```python
import nlp
dataset = nlp.load("./bbc.py")
```
And
```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```
These three ways give me:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
dataset_module = importlib.import_module(module_path)
File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```
Any idea what I'm missing? Or I might have spotted a bug :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2/timeline | null | completed | null | null | false | [
"My first bug report ❤️\r\nLooking into this right now!",
"Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(self, **config):\r\n self.train = config.pop(\"train\", None)\r\n self.validation = config.pop(\"validation\", None)\r\n super(Bbc, self).__init__(**config)\r\n\r\n def _info(self):\r\n return nlp.DatasetInfo(builder=self, description=\"bla\", features=nlp.features.FeaturesDict({\"id\": nlp.int32, \"text\": nlp.string, \"label\": nlp.string}))\r\n\r\n def _split_generators(self, dl_manager):\r\n return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={\"filepath\": self.train}),\r\n nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={\"filepath\": self.validation})]\r\n\r\n def _generate_examples(self, filepath):\r\n with open(filepath) as f:\r\n reader = csv.reader(f, delimiter=',', quotechar=\"\\\"\")\r\n lines = list(reader)[1:]\r\n\r\n for idx, line in enumerate(lines):\r\n yield idx, {\"id\": idx, \"text\": line[1], \"label\": line[0]}\r\n\r\n```\r\n\r\nAnd the dataset folder becomes:\r\n```\r\n.\r\n├── bbc\r\n│ ├── bbc.py\r\n│ └── data\r\n│ ├── test.csv\r\n│ └── train.csv\r\n```\r\nI can load the dataset by using the keywords arguments like this:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\"})\r\n```\r\n\r\nThat was the good part ^^ Because it took me some time to understand that the script itself is put in cache in `datasets/src/nlp/datasets/some-hash/bbc.py` which is very difficult to discover without checking the source code. It means that doesn't matter the changes you do to your original script it is taken into account. I think instead of doing a hash on the name (I suppose it is the name), a hash on the content of the script itself should be a better solution.\r\n\r\nThen by diving a bit in the code I found the `force_reload` parameter [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L50) but the call of this `load_dataset` method is done with the `builder_kwargs` as seen [here](https://github.com/huggingface/datasets/blob/master/src/nlp/load.py#L166) which is ok until the call to the builder is done as the builder do not have this `force_reload` parameter. To show as example, the previous load becomes:\r\n```python\r\nimport nlp\r\ndataset = nlp.load(\"bbc\", builder_kwargs={\"train\": \"bbc/data/train.csv\", \"validation\": \"bbc/data/test.csv\", \"force_reload\": True})\r\n```\r\nRaises\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 283, in load\r\n dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/load.py\", line 170, in builder\r\n builder_instance = builder_cls(**builder_kwargs)\r\n File \"/home/jplu/dev/jplu/datasets/src/nlp/datasets/84d638d2a8ca919d1021a554e741766f50679dc6553d5a0612b6094311babd39/bbc.py\", line 12, in __init__\r\n super(Bbc, self).__init__(**config)\r\nTypeError: __init__() got an unexpected keyword argument 'force_reload'\r\n```\r\nSo yes the cache is refreshed with the new script but then raises this error.",
"Ok great, so as discussed today, let's:\r\n- have a main dataset directory inside the lib with sub-directories hashed by the content of the file\r\n- keep a cache for downloading the scripts from S3 for now\r\n- later: add methods to list and clean the local versions of the datasets (and the distant versions on S3 as well)\r\n\r\nSide question: do you often use `builder_kwargs` for other things than supplying file paths? I was thinking about having a more easy to read and remember `data_files` argument maybe.",
"Good plan!\r\n\r\nYes I do use `builder_kwargs` for other things such as:\r\n- dataset name\r\n- properties to know how to properly read a CSV file: do I have to skip the first line in a CSV, which delimiter is used, and the columns ids to use.\r\n- properties to know how to properly read a JSON file: which properties in a JSON object to read",
"Done!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1057/comments | https://api.github.com/repos/huggingface/datasets/issues/1057/events | https://github.com/huggingface/datasets/pull/1057 | 756,331,419 | MDExOlB1bGxSZXF1ZXN0NTMxODkzMjE4 | 1,057 | Adding TamilMixSentiment | [] | closed | false | null | 1 | 2020-12-03T16:04:25Z | 2020-12-04T10:09:34Z | 2020-12-04T10:09:12Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1057/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1057",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1057"
} | true | [
"looks like this pr incldues changes about many other files than the ones for tamilMixSentiment, could you create another branch and another PR ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/4307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4307/comments | https://api.github.com/repos/huggingface/datasets/issues/4307/events | https://github.com/huggingface/datasets/pull/4307 | 1,231,175,639 | PR_kwDODunzps43k-Wo | 4,307 | Add packaged builder configs to the documentation | [] | closed | false | null | 1 | 2022-05-10T13:34:19Z | 2022-05-10T14:03:50Z | 2022-05-10T13:55:54Z | null | Adding the packaged builder configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4307/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4307",
"merged_at": "2022-05-10T13:55:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4307"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4537/comments | https://api.github.com/repos/huggingface/datasets/issues/4537/events | https://github.com/huggingface/datasets/pull/4537 | 1,279,144,310 | PR_kwDODunzps46ESJn | 4,537 | Fix WMT dataset loading issue and docs update | [] | closed | false | null | 2 | 2022-06-21T21:48:02Z | 2022-06-24T07:05:43Z | 2022-06-24T07:05:10Z | null | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`. And READMEs are updated for the corresponding datasets.
As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supporting build by Apple or Google. So, if local testing is needed, I am not able to perform it.
Let me know if any additional changes are required.
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4537/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4537",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4537"
} | true | [
"The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream [email protected]:huggingface/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```",
"Closing this PR due to unwanted commit changes. Will be opening new PR for the same issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/1776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1776/comments | https://api.github.com/repos/huggingface/datasets/issues/1776/events | https://github.com/huggingface/datasets/issues/1776 | 792,755,249 | MDU6SXNzdWU3OTI3NTUyNDk= | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | [] | closed | false | null | 6 | 2021-01-24T09:28:24Z | 2021-05-20T04:15:58Z | 2021-05-20T04:15:58Z | null | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with very large corpus which generates huge cache file (several TB cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating cache?
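One approach that comes up later in this thread is `Dataset.set_transform` (see #1825), which applies a function lazily at access time instead of writing a processed cache file. A minimal sketch of that idea, with an illustrative tokenizer and file name that are not part of the original report (the initial raw-text Arrow conversion still happens; only the processed output is never cached):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # illustrative model choice

def encode(batch):
    # runs lazily on each accessed batch; nothing extra is written to disk
    return tokenizer(batch["text"], truncation=True, padding="max_length")

ds = load_dataset("text", data_files={"train": "corpus.txt"}, split="train")
ds.set_transform(encode)  # applied on the fly at __getitem__ time
print(ds[0].keys())  # e.g. input_ids, token_type_ids, attention_mask
```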
BTW, I tried raising `writer_batch_size`. It seems that argument doesn't have any effect when it's larger than `batch_size`, because the whole batch is saved immediately after it's processed. Please check the following code:
https://github.com/huggingface/datasets/blob/0281f9d881f3a55c89aeaa642f1ba23444b64083/src/datasets/arrow_dataset.py#L1532 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1776/timeline | null | completed | null | null | false | [
"We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?",
"It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py",
"Indeed I will submit a PR in a fez days to enable processing on-the-fly :)\r\nThis can be useful in language modeling for tokenization, padding etc.\r\n",
"any update on this issue? ...really look forward to use it ",
"Hi @acul3,\r\n\r\nPlease look at the discussion on a related Issue #1825. I think using `set_transform` after building from source should do.",
"@gchhablani thank you so much\r\n\r\nwill try look at it"
] |
https://api.github.com/repos/huggingface/datasets/issues/3320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3320/comments | https://api.github.com/repos/huggingface/datasets/issues/3320/events | https://github.com/huggingface/datasets/issues/3320 | 1,063,531,992 | I_kwDODunzps4_ZDXY | 3,320 | Can't get tatoeba.rus dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-11-25T12:31:11Z | 2021-11-26T10:30:29Z | 2021-11-26T10:30:29Z | null | ## Describe the bug
It gives an error.
> FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus
## Steps to reproduce the bug
```python
data=load_dataset("xtreme","tatoeba.rus", split="validation")
```
## Solution
The library tries to access the **master** branch. In the github repo of facebookresearch, it is in the **main** branch. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3320/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/60 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/60/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/60/comments | https://api.github.com/repos/huggingface/datasets/issues/60/events | https://github.com/huggingface/datasets/pull/60 | 614,372,553 | MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy | 60 | Update to simplify some datasets conversion | [] | closed | false | null | 6 | 2020-05-07T22:02:24Z | 2020-05-08T10:38:32Z | 2020-05-08T10:18:24Z | null | This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626
We could also change (not included in this PR yet):
- `supervized_keys` to make them a NamedTuple instead of a dataclass, and
- handle specifically the `Translation` features.
as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236
@patrickvonplaten @mariamabarham tell me if you want these two last changes as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/60/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/60/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/60.diff",
"html_url": "https://github.com/huggingface/datasets/pull/60",
"merged_at": "2020-05-08T10:18:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/60.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/60"
} | true | [
"Awesome! ",
"Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)",
"> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)\r\n\r\nWe should probably open a new PR about this",
"I think it might be a good idea to both change the supervised keys to a named tuple and also handle the translation features specifically.",
"Just noticed that `pyarrow` apparently does not have a `is_boolean` function. Or do I have the wrong `pyarrow` version? ",
"Ah, it was a typo `pa.types.is_boolean` is the correct name. Will fix in: https://github.com/huggingface/nlp/pull/59"
] |
https://api.github.com/repos/huggingface/datasets/issues/4492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4492/comments | https://api.github.com/repos/huggingface/datasets/issues/4492/events | https://github.com/huggingface/datasets/pull/4492 | 1,271,112,497 | PR_kwDODunzps45pktu | 4,492 | Pin the revision in imagenet download links | [] | closed | false | null | 1 | 2022-06-14T17:15:17Z | 2022-06-14T17:35:13Z | 2022-06-14T17:25:45Z | null | Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example we may split it into many more shards for better parallelism.
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4492/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4492.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4492",
"merged_at": "2022-06-14T17:25:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4492.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4492"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2026/comments | https://api.github.com/repos/huggingface/datasets/issues/2026/events | https://github.com/huggingface/datasets/issues/2026 | 828,194,467 | MDU6SXNzdWU4MjgxOTQ0Njc= | 2,026 | KeyError on using map after renaming a column | [] | closed | false | null | 3 | 2021-03-10T18:54:17Z | 2021-03-11T14:39:34Z | 2021-03-11T14:38:40Z | null | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside returns this:
```python
{'label': tensor([6, 9])}
```
Apparently, both `img` and `image` do not exist after renaming.
Note that this code works fine with `img` everywhere.
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
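Based on the explanation in the comments below (`rename_column` does not update the `_format_columns` list previously set by `set_format`), a sketch of a workaround is to rename the column first and set the format afterwards, so the format is defined with the new column name:
```python
# sketch of the workaround implied by the comments below
raw_dataset = load_dataset("cifar10")
raw_dataset = raw_dataset.rename_column("img", "image")       # rename first
raw_dataset.set_format("torch", columns=["image", "label"])   # then set the format with the new name
```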
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2026/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.",
"Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?",
"I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/events | https://github.com/huggingface/datasets/pull/1874 | 807,786,094 | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | 1,874 | Adding Europarl Bilingual dataset | [] | closed | false | null | 7 | 2021-02-13T17:02:04Z | 2021-03-04T10:38:22Z | 2021-03-04T10:38:22Z | null | Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some keys that reference nonexistent sentences).
I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"merged_at": "2021-03-04T10:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1874"
} | true | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
"I've resolved some requirements, but I cannot create dummy data. The dataset works as follows: for each language pair `<lang1>-<lang2>` 3 files are downloaded:\r\n- dataset for `<lang1>`\r\n- dataset for `<lang2>`\r\n- alignments between `<lang1>` and `<lang2>`\r\n\r\nSuppose we work with the `bg-cs` language pair. Then, the dataset will download three `gzip` files which should be decompressed. I do not understand the relation between the folders created by the script to create dummy data and the original data provided by the download manager.",
"Hi ! Indeed the data files structure of this dataset looks very specific.\r\nThe command `datasets-cli dummy_data ./datasets/europarl_bilingual` shows some instructions for each split but let me add more details.\r\n\r\nFirst things to know is that the dummy data files need to be uncompressed data, so for example for the file `bg.zip` you should actually have one folder with all the xml files in it instead. In the same way, `bg-cs.xml.gz` must be replaced by an actual uncompressed xml file.\r\n\r\nLet's take the bg-cs config as an example. To make the dummy data you need to:\r\n- go to `./datasets/europarl_bilingual/dummy/bg-cs/8.0.0` and create a folder named `dummy_data`. Then go inside this folder\r\n- create a text file named `bg-cs.xml.gz` containing xml content (so without .gz compression). The xml content must have the same structure as the original `bg-cs.zml` but only include 1 `linkGrp` entry. You can pick one entry from the original `bg-cs.xml` file. Let's say this entry is about this file: `ep-06-01-16-003.xml`\r\n- create a folder named `bg.zip` and inside this folder add one file Europarl/raw/bg/ep-06-01-16-003.xml. You can pick the xml file from the original `bg.zip` archive.\r\n- create a folder named `cs.zip` and inside this folder add one file Europarl/raw/cs/ep-06-01-16-003.xml. You can pick the xml file from the original `cs.zip` archive.\r\n- zip the `dummy_data` into `dummy_data.zip`\r\n\r\nAt this point you have dummy data files to generate 1 example which is what we want to be able to test the dataset script `europarl_bilingual.py` with pytest. \r\n\r\nIn particular this will make this test pass:\r\n```\r\npytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_europarl_bilingual\r\n```\r\n\r\nIdeally it would be awesome to have dummy data for all the different configs so if we manage to make a script that generates all of it automatically that would be perfect. However since the structure is not trivial, another option would be to only have the dummy data for only 1 or 2 configs, like what we do for [bible_para](https://github.com/huggingface/datasets/blob/master/datasets/bible_para/bible_para.py) for example. In `bible_para` only a few configurations are tested. As you can see there is only 6 configs in the `BUILDER_CONFIGS` attribute. All the other configs can still be used, here is what is said inside the dataset card of bible_para:\r\n```\r\nTo load a language pair which isn't part of the config, all you need to do is specify the language code as pairs.\r\nYou can find the valid pairs in Homepage section of Dataset Description: http://opus.nlpl.eu/bible-uedin.php\r\nE.g.\r\n\r\n`dataset = load_dataset(\"bible_para\", lang1=\"fi\", lang2=\"hi\")`\r\n```\r\nIn this case the configuration \"fi-hi\" is simply created on the fly, instead of being picked from the `BUILDER_CONFIGS` list.\r\n\r\nI hope this helps, let me know if you have questions or if I can help",
"I already created the scripts to create reduced versions of the data. What I didn't understand was how to put files in the dummy_data folder because, as you noticed, some file decompress to a nested tree structure. I will now try again with your suggestions!",
"Is there something else I should do? If not can this be integrated?",
"Thanks a lot !!\r\nSince the set of all the dummy data files is quite big I only kept a few of them. If we had kept them all the size of the `datasets` repo would have increased too much :/\r\nSo I did the same as for `bible_para`: only keep a few configurations in BUILDER_CONFIGS and have all the other pairs loadable with the lang1 and lang2 parameters like this:\r\n\r\n`dataset = load_dataset(\"europarl_bilingual\", lang1=\"fi\", lang2=\"fr\")`"
] |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | [] | closed | false | null | 3 | 2021-03-29T09:03:09Z | 2021-03-30T17:40:57Z | 2021-03-30T17:40:57Z | null | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
The questions are in the wrong format and are not readable. Could you please have a look? Thanks @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | completed | null | null | false | [
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n... \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n... ]\r\n>>> print(questions)\r\n['متى بدات المجلة المدرسية في نوتردام بالنشر?', 'كم مرة يتم نشرها في نوتردام?', 'ما هي الورقة اليومية للطلاب في نوتردام?', 'كم عدد الاوراق الاخبارية للطلاب التي وجدت في نوتردام?', 'في اي سنة بدات ورقة الطالب الحس السليم بالنشر في نوتردام?']\r\n```\r\nI don't think we can change this",
"Hi @dorost1234.\r\n\r\nIn Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) with its unique representation in terms of code points. That is what you see: Unicode code points (represented by a \\u escaped sequence of 16-bit hex values).\r\n\r\nCharacters are usually represented (on screen and papers) with a graphical element called _glyph_. That is what you would like to see: glyphs. But Python does not care about glyphs: that is the job of the GUI or the terminal; glyphs are what you get with the `print` function (if your terminal is properly configured to display those glyphs).\r\n\r\nYou have more detailed information about Unicode in the Python documentation: https://docs.python.org/3/howto/unicode.html",
"thank you so much for the insightful comments. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1115/comments | https://api.github.com/repos/huggingface/datasets/issues/1115/events | https://github.com/huggingface/datasets/issues/1115 | 757,127,527 | MDU6SXNzdWU3NTcxMjc1Mjc= | 1,115 | Incorrect URL for MRQA SQuAD train subset | [] | closed | false | null | 1 | 2020-12-04T14:05:24Z | 2020-12-06T17:14:22Z | 2020-12-06T17:14:22Z | null | https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53
The URL for `train+SQuAD` subset of MRQA points to the dev set instead of train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1115/timeline | null | completed | null | null | false | [
"good catch !"
] |
https://api.github.com/repos/huggingface/datasets/issues/432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/432/comments | https://api.github.com/repos/huggingface/datasets/issues/432/events | https://github.com/huggingface/datasets/pull/432 | 665,234,340 | MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3 | 432 | Fix handling of config files while loading datasets from multiple processes | [] | closed | false | null | 4 | 2020-07-24T15:10:57Z | 2020-08-01T17:11:42Z | 2020-07-30T08:25:28Z | null | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
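The mitigation described in the next paragraph boils down to a cheap content comparison before the copy. A minimal sketch of that idea (the function and variable names here are illustrative, not the ones used in the library):
```python
import filecmp
import os
import shutil

def copy_if_changed(downloaded_path: str, cached_path: str) -> None:
    # Skip the write entirely when an identical dataset_infos.json is already cached,
    # so concurrent readers are much less likely to see a partially written file.
    if os.path.exists(cached_path) and filecmp.cmp(downloaded_path, cached_path, shallow=False):
        return
    shutil.copyfile(downloaded_path, cached_path)
```
As the review comments below note, wrapping the copy in a `filelock`-style lock would make it atomic rather than merely less likely to race.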
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but now it's less likely to occur if some basic precautions are taken by the library user, e.g., downloading all datasets to cache before spawning multiple processes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/432/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"merged_at": "2020-07-30T08:25:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432"
} | true | [
"Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)",
"Ok I see.\r\nWhy not use filelock in this case then ?",
"I think we should 🙂",
"Thanks for approving my patch.\n\nI agree that if copying is needed then some locking mechanism should be put in place. But, I don't think a file should be needlessly copied without a check. So I guess the flow should be, lock => copy if needed => unlock, and add locks wherever else that file is being accessed.\n\nI'll also add that my personal experience with filelock on a different project hasn't been that great, and on some occasions a process somehow got through the lock -- I've never gotten to the bottom of that but it tainted my view of that module. Perhaps it's been fixed (or I just miss used it), but thought you should know to take steps to test it."
] |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-04-01T23:28:36Z | 2021-04-02T10:05:19Z | 2021-04-02T10:05:19Z | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
This is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | null | null | false | [
"closing since I think this is cc100, just the name has been changed. thanks "
] |
https://api.github.com/repos/huggingface/datasets/issues/3585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3585/comments | https://api.github.com/repos/huggingface/datasets/issues/3585/events | https://github.com/huggingface/datasets/issues/3585 | 1,105,821,470 | I_kwDODunzps5B6X8e | 3,585 | Datasets streaming + map doesn't work for `Audio` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 1 | 2022-01-17T12:55:42Z | 2022-01-20T13:28:00Z | 2022-01-20T13:28:00Z | null | ## Describe the bug
When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "en", streaming=True, split="train")
def map_fn(batch):
print("audio keys", batch["audio"].keys())
batch["audio"] = batch["audio"]["array"][:100]
return batch
ds = ds.map(map_fn)
sample = next(iter(ds))
```
I think the audio is somehow decoded before `.map(...)` is actually called.
## Expected results
IMO, the above code snippet should work.
## Actual results
```bash
audio keys dict_keys(['path', 'bytes'])
Traceback (most recent call last):
File "./run_audio.py", line 15, in <module>
sample = next(iter(ds))
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "./run_audio.py", line 9, in map_fn
batch["input"] = batch["audio"]["array"][:100]
KeyError: 'array'
```
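A possible interim workaround (not part of the original report) is to decode the raw bytes manually inside the mapped function, since the streaming sample only exposes `path` and `bytes` here. A rough sketch, assuming the installed decoder can handle the underlying audio format (Common Voice ships MP3, so `soundfile` may need to be replaced by `torchaudio` or `librosa`):
```python
import io
import soundfile as sf  # assumption: this backend can decode the files at hand

def map_fn(batch):
    array, sampling_rate = sf.read(io.BytesIO(batch["audio"]["bytes"]))
    batch["audio"] = array[:100]
    return batch

ds = ds.map(map_fn)
```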
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3585/timeline | null | completed | null | null | false | [
"This seems related to https://github.com/huggingface/datasets/issues/3505."
] |
https://api.github.com/repos/huggingface/datasets/issues/3905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3905/comments | https://api.github.com/repos/huggingface/datasets/issues/3905/events | https://github.com/huggingface/datasets/pull/3905 | 1,168,320,568 | PR_kwDODunzps40ZJQJ | 3,905 | Perplexity Metric Card | [] | closed | false | null | 3 | 2022-03-14T12:39:40Z | 2022-03-16T19:38:56Z | 2022-03-16T19:38:56Z | null | Add Perplexity metric card
Note that it is currently still missing the citation, but I plan to add it later today. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3905/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3905.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3905",
"merged_at": "2022-03-16T19:38:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3905.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3905"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.",
"I'm wondering if we should add that perplexity can be used for analyzing datasets as well",
"Otherwise, looks good! Good job, @emibaylor !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3349/comments | https://api.github.com/repos/huggingface/datasets/issues/3349/events | https://github.com/huggingface/datasets/pull/3349 | 1,067,853,601 | PR_kwDODunzps4vOF-s | 3,349 | raise exception instead of using assertions. | [] | closed | false | null | 6 | 2021-12-01T01:37:51Z | 2021-12-20T16:07:27Z | 2021-12-20T16:07:27Z | null | fix for the remaining files https://github.com/huggingface/datasets/issues/3171 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3349/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3349",
"merged_at": "2021-12-20T16:07:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3349"
} | true | [
"@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. ",
"@mariosasko - Hello, Are there any additional changes required from my end??. Wondering if this PR can be merged or still pending on additional steps.",
"@mariosasko - The approved changes in the PR now has conflicts with the master branch. Would you like me to resolve the conflicts??. Let me know. ",
"@mariosasko @lhoestq - Gentle reminder about my previous question. ",
"Hi ! Thanks for the heads up :)\r\nI just resolved the conflicts, it should be alright now",
"Merging, thanks for the help @manisnesan !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3831/comments | https://api.github.com/repos/huggingface/datasets/issues/3831/events | https://github.com/huggingface/datasets/issues/3831 | 1,160,501,000 | I_kwDODunzps5FK9cI | 3,831 | when using to_tf_dataset with shuffle is true, not all completed batches are made | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-03-06T02:43:50Z | 2022-03-08T15:18:56Z | 2022-03-08T15:18:56Z | null | ## Describe the bug
When converting a dataset to a tf_dataset by using `to_tf_dataset` with shuffle set to true, the remainder is not converted into one batch.
## Steps to reproduce the bug
this is the sample code below
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected results
Regardless of whether shuffle is true or not, a 67-row dataset should yield 5 batches when the batch size is 16.
## Actual results
4 batches
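For reference, the replies below explain that `to_tf_dataset` drops the smaller remainder batch by default when `shuffle=True`, and that passing `drop_remainder=False` keeps it. A sketch with illustrative column names:
```python
# sketch: keep every row, including the last partial batch (67 = 4*16 + 3)
tf_ds = dataset.to_tf_dataset(
    columns=["input_ids"],      # illustrative; use the real feature columns
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    drop_remainder=False,       # the key change: do not drop the 3-row remainder
)
```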
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3831/timeline | null | completed | null | null | false | [
"Maybe @Rocketknight1 can help here",
"Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.",
"@Rocketknight1 Oh, thank you. I didn't get **drop_remainder** Have a nice day!",
"No problem!\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3208/comments | https://api.github.com/repos/huggingface/datasets/issues/3208/events | https://github.com/huggingface/datasets/pull/3208 | 1,044,504,093 | PR_kwDODunzps4uFTIs | 3,208 | Pin keras version until TF fixes its release | [] | closed | false | null | 0 | 2021-11-04T09:13:32Z | 2021-11-04T09:30:55Z | 2021-11-04T09:30:54Z | null | Fix #3207. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3208/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3208",
"merged_at": "2021-11-04T09:30:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3208"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5137/comments | https://api.github.com/repos/huggingface/datasets/issues/5137/events | https://github.com/huggingface/datasets/issues/5137 | 1,414,642,723 | I_kwDODunzps5UUbwj | 5,137 | Align task tags in dataset metadata | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 14 | 2022-10-19T09:41:42Z | 2022-11-10T05:25:58Z | 2022-10-25T06:17:00Z | null | ## Describe
Once we have agreed on a common naming for task tags for all open source projects, we should align on them.
## Steps
- [x] Align task tags in canonical datasets
- [x] task_categories: 4 datasets
- [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
- [x] task_categories: 451 datasets
- [x] task_ids: 556 datasets
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5137/timeline | null | completed | null | null | false | [
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had some complications\r\n\r\nWhen trying to open a PR to fix the task tags, an exception was thrown if:\r\n- the metadata contained \"languages\" or \"licenses\" (instead of \"language\" or \"license\")\r\n- the metadata contained a non-valid language: `en-US` (instead of `en`), `no` (instead of `'no'`),...\r\n- the metadata contained a non-valid license\r\n- either `task_categories` or `task_ids` was not an array (a dict for each config)\r\n- the metadata contained non-valid tag names\r\n\r\nErrors:\r\n```\r\nValueError: - Error: \"languages\" is deprecated. Use \"language\" instead.\r\n```\r\n```\r\nValueError: - Error: \"licenses\" is deprecated. Use \"license\" instead.\r\n```\r\n```\r\nValueError: - Error: \"language[17]\" must only contain lowercase characters\r\n```\r\n```\r\nValueError: - Error: \"language[0]\" with value \"cz, de, it\" is not valid. It must be an ISO 639-1, 639-2 or 639-3 code (two/three letters), or a special value like \"code\", \"multilingual\". If you want to use BCP-47 identifiers, you can specify them in language_bcp47.\r\n```\r\n```\r\nValueError: - Error: \"task_ids\" must be an array\r\n```",
"All Hub datasets are done.",
"great job! did you have feedback from Hub users/i.E. repo authors?",
"Yes, @julien-c. These are some of the feedbacks:\r\n- Most people just thank for the fix: [cahya/librivox-indonesia](https://huggingface.co/datasets/cahya/librivox-indonesia/discussions/1#6357cd8a292a050ebd705f84), [TurkuNLP/xlsum-fi](https://huggingface.co/datasets/TurkuNLP/xlsum-fi/discussions/1#6357828aa1f8ad1c31bcbe46), [coastalcph/fairlex](https://huggingface.co/datasets/coastalcph/fairlex/discussions/4#6351a527a8e595171ab1aef2)\r\n- Why are we changing their task names? [joelito/lextreme](https://huggingface.co/datasets/joelito/lextreme/discussions/1#6351b576fe367c0d9b12041b)\r\n - I take note of this for the next bulk operation; besides the PR title, we should also add a description to explain the reason for the change and also maybe putting a link to some pertinent GH Issue page\r\n- Some of them ask where to find the list of the supported task values is: [dennlinger/klexikon](https://huggingface.co/datasets/dennlinger/klexikon/discussions/3#6356b3ea80f8cb3ab777ac5c), [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad/discussions/1#635262467e4cc3135fd09f58)\r\n - Currently, the list is here: https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts#L85\r\n - Maybe we could made them more easily accessible\r\n- Some people do not agree about current \"hierarchy\":\r\n - text-scoring: [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/1#6357c1b128792d8cdd51e9f9) (but referring to [emrecan/nli_tr_for_simcse](https://huggingface.co/datasets/emrecan/nli_tr_for_simcse/discussions/2/files))\r\n - Before \"text-scoring\" was a task_category, with task_ids [\"semantic-similarity-scoring\", \"sentiment-scoring\"]\r\n - Now all three are task_ids [\"text-scoring\", \"semantic-similarity-scoring\", \"sentiment-scoring\"] under the task_category \"text-classification\"\r\n - People complain that their scoring tasks are not classification task\r\n - binary-classification: why don't we have binary-classification? We have multi-class-classification, multi-label-classification and sentiment-classification, but not binary-classification\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?\r\n\r\nNOTE: I'm editing this comment to add more feedback",
"As someone with feedback on the updates (which I highly appreciate seeing included here :D), a few comments from a \"user perspective\": \r\n\r\n* I think the general confusion for me was also surrounding the hierarchy; it doesn't really become super clear (even when using the tagger space) that one is a subset of the other, especially since it seems to be still possible to include fine-grained tasks without the \"parent category\"?\r\n* The datasets explorer still shows tags that are no longer valid (e.g., super specific ones such as `summarization-other-paper-abstract-generation`, but also ones that should be `task_categories`, such as `summarization`). I'm assuming this will be fixed soon, but until then it can confuse people who don't understand why they suddenly can't use seemingly still valid tags anymore.\r\n* As I mentioned to @albertvillanova, having a dedicated page in the docs with explanations (especially wrt the difference between `task_categories` and `task_ids`) would be super helpful. However, I think it would have been sufficient to just include some description in the dataset PRs where you can link to the Github/other discussion on the topic :) That way, I can check myself what changes are expected to happen.\r\n\r\nThanks again for the streamlining process, I personally learned a fair bit about the tagging structure in the meantime!\r\nBest,\r\nDennis",
"Thanks to you both for your feedback! super useful! cc'ing @osanseviero too 🙂\r\n\r\n> The datasets explorer still shows tags that are no longer valid\r\n\r\nwait which explorer is that? is it https://huggingface.co/datasets/viewer/ ?\r\n",
"Sorry, this one: https://huggingface.co/datasets \r\nAnd then selecting the \"Fine-Grained Tasks\".",
"good feedback! we'll improve this",
"Super useful feedback, thanks a lot!",
"- Some people do not agree about current \"hierarchy\":\r\n - symbolic-regression: [yoshitomo-matsubara/srsd-feynman_hard](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_hard/discussions/2#63614194c12a09b8a31457cc), [yoshitomo-matsubara/srsd-feynman_medium](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_medium/discussions/2#6361418aeee0d27f04379e43), [yoshitomo-matsubara/srsd-feynman_easy](https://huggingface.co/datasets/yoshitomo-matsubara/srsd-feynman_easy/discussions/2#6361416e00905b1ffb8d0112)\r\n - Why don't we have symbolic-regression task?",
"@albertvillanova \r\nThank you for sharing our voice here!\r\n\r\nYes, we want `symbolic-regression` to be listed as a task. This task has been attracting attention from the machine learning/deep learning community, and unfortunately existing symbolic regression datasets are de-centralized in the community (hosted at individual platforms like author website, github, etc).\r\nIt would be great for the community if Hugging Face can support the task."
] |
https://api.github.com/repos/huggingface/datasets/issues/5172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5172/comments | https://api.github.com/repos/huggingface/datasets/issues/5172/events | https://github.com/huggingface/datasets/issues/5172 | 1,425,523,114 | I_kwDODunzps5U98Gq | 5,172 | Inconsistency behavior between handling local file protocol and other FS protocols | [] | open | false | null | 0 | 2022-10-27T12:03:20Z | 2022-10-27T12:05:19Z | null | null | ### Describe the bug
These lines are used during `load_from_disk`:
```
if is_remote_filesystem(fs):
dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
fs = fsspec.filesystem("file")
dest_dataset_dict_path = dataset_dict_path
```
If a local FS is given, then it will use the whole URL as the path name. If a remote FS is given, then it will use only the path part of the URL. This is inconsistent behavior when handling a file: when using a remote FS, you must write a URL, but for a local FS, even if you passed `LocalFileSystem` as `fs`, you still can't use a `file://` URL; it will be recognized as a directory named `file:`.
### Steps to reproduce the bug
```
import fsspec.core
url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path) # this will always work
load_from_disk(path, fs) # only works for local FS
load_from_disk(url, fs) # only works for remote FS
```
### Expected behavior
Either `url` or `path` should always work.
I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, would fix this, since:
```
fsspec.core.url_to_fs("/somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST") -> HDFS, '/somewhere/MNIST'
```
and
```
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```
In theory, this wouldn't break anything, since giving a local path or a remote URI still works. It would only affect local URIs (making them work too).
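As a rough sketch of what this could look like (the helper name below is made up for illustration and is not part of the `datasets` API):
```python
# Hypothetical helper, for illustration only: let fsspec resolve both the
# filesystem and the protocol-stripped path, so plain local paths,
# "file://" URLs and remote URIs are all handled the same way.
import fsspec.core


def resolve_dataset_path(dataset_dict_path: str, **storage_options):
    fs, dest_dataset_dict_path = fsspec.core.url_to_fs(dataset_dict_path, **storage_options)
    return fs, dest_dataset_dict_path


# All of these resolve to a usable (fs, path) pair:
# resolve_dataset_path("/somewhere/MNIST")        -> (LocalFileSystem, "/somewhere/MNIST")
# resolve_dataset_path("file:///somewhere/MNIST") -> (LocalFileSystem, "/somewhere/MNIST")
# resolve_dataset_path("hdfs:///somewhere/MNIST") -> (HDFS, "/somewhere/MNIST")
```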
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5172/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2954/comments | https://api.github.com/repos/huggingface/datasets/issues/2954/events | https://github.com/huggingface/datasets/pull/2954 | 1,003,904,803 | PR_kwDODunzps4sHa8O | 2,954 | Run tests in parallel | [] | closed | false | null | 2 | 2021-09-22T07:00:44Z | 2021-09-28T06:55:51Z | 2021-09-28T06:55:51Z | null | Run CI tests in parallel to speed up the test suite.
Speed up results:
- Linux: from `7m 30s` to `5m 32s`
- Windows: from `13m 52s` to `11m 10s`
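For illustration only, this kind of parallelism is typically obtained with the `pytest-xdist` plugin; the exact options used in the CI workflow may differ:
```python
# Illustration only: with pytest-xdist installed (pip install pytest-xdist),
# tests can be spread across several worker processes.
import pytest

# "-n auto" starts one worker per available CPU core; "--dist loadfile" keeps
# tests from the same file on the same worker, which helps with shared fixtures.
pytest.main(["-n", "auto", "--dist", "loadfile", "tests/"])
```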
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2954/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2954",
"merged_at": "2021-09-28T06:55:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2954"
} | true | [
"There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```",
"There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`"
] |
https://api.github.com/repos/huggingface/datasets/issues/5096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5096/comments | https://api.github.com/repos/huggingface/datasets/issues/5096/events | https://github.com/huggingface/datasets/issues/5096 | 1,403,379,816 | I_kwDODunzps5TpeBo | 5,096 | Transfer some canonical datasets under an organization namespace | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | open | false | null | 2 | 2022-10-10T15:44:31Z | 2023-06-07T07:51:54Z | null | null | As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test it using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co/dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
- It already exists "facebook/multilingual_librispeech"
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5096/timeline | null | null | null | null | false | [
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 2.01MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default (download: 411 bytes, generated: 385 bytes, post-processed: Unknown size, total: 796 bytes) to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 411/411 [00:00<00:00, 293kB/s]\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 304.16it/s]\r\nOut[1]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n\r\nIn [2]: from datasets import load_dataset; ds = load_dataset(\"dummy-canonical-org/dummy_canonical_dataset\"); ds\r\nDownloading builder script: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.98k/2.98k [00:00<00:00, 1.57MB/s]\r\nDownloading and preparing dataset dummy_canonical_dataset/default to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4...\r\nDataset dummy_canonical_dataset downloaded and prepared to .../.cache/huggingface/datasets/dummy-canonical-org___dummy_canonical_dataset/default/1.0.0/100870c358637e269fee140585e61e1472d5075a9bf6f866719934c725e55fb4. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 362.48it/s]\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['langs', 'ner_tags', 'tokens'],\r\n num_rows: 3\r\n })\r\n})\r\n```",
"Cool ! 🚀 "
] |
https://api.github.com/repos/huggingface/datasets/issues/1139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1139/comments | https://api.github.com/repos/huggingface/datasets/issues/1139/events | https://github.com/huggingface/datasets/pull/1139 | 757,393,158 | MDExOlB1bGxSZXF1ZXN0NTMyNzc3OTg2 | 1,139 | Add ReFreSD dataset | [] | closed | false | null | 3 | 2020-12-04T20:45:11Z | 2020-12-16T16:01:18Z | 2020-12-16T16:01:18Z | null | This PR adds the **ReFreSD dataset**.
The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.
Need feedback on:
- I couldn't generate the dummy data. The file we download is a TSV file but has no extension; I suppose this is the problem. I'm sure there is a simple trick to make this work.
- The feature names.
- I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.
- There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original TSV. I changed it to `all_labels` but I'm sure there is a better name.
- The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple.
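Putting these points together, here is a hedged sketch of one possible feature schema (the field names are just the candidates discussed above, and the label columns are left as plain strings because their class names come from the TSV):
```python
# Hedged sketch of a candidate feature schema; names follow the questions
# above and are not necessarily what ends up in the merged script.
import datasets

features = datasets.Features(
    {
        # A single Translation feature instead of separate sentence_en / sentence_fr columns.
        "sentence_pair": datasets.Translation(languages=["en", "fr"]),
        # Binary label, plus the 3-class label from the original "#3_labels" column;
        # concrete class names are omitted here since they come from the TSV.
        "label": datasets.Value("string"),
        "all_labels": datasets.Value("string"),
        # Rationales parsed from their string form into sequences of integers.
        "rationale_en": datasets.Sequence(datasets.Value("int32")),
        "rationale_fr": datasets.Sequence(datasets.Value("int32")),
    }
)
```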
Thanks in advance | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1139/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1139.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1139",
"merged_at": "2020-12-16T16:01:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1139.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1139"
} | true | [
"Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The file we download is a tsv file, but without extension, I suppose this is the problem. I'm sure there is a simple trick to make this work.\r\n\r\nyou can use `--match_text_files` in the dummy data generation:\r\n`python datasets-cli dummy_data datasets/refresd --auto_generate --match_text_files \"REFreSD_rationale\"`\r\n\r\n> * The feature names.\r\n> \r\n> * I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.\r\n\r\nIt would actually be even better to use the `Translation` feature here to replace best:\r\n`\"sentence_pair\": datasets.Translation(languages=['en', 'fr']),`\r\n\r\nThen during `_generate_examples` this filed should look like\"\r\n`{\"sentence_pair\": {\"fr\": french, \"en\": english}}`\r\n\r\n> * There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels` but I'm sure there is better.\r\nLooks good!\r\n\r\n> * The rationales are lists of integers, extracted as a string at first. I wonder what's the best way to treat them, any idea? Also, I couldn't manage to make a `Sequence` of `int8` but I'm sure I've missed something simple.\r\n\r\nHaving the feature declared as `\"rationale_en\": datasets.Sequence(datasets.Value(\"int32\"))` should work\r\n\r\n> \r\n> Thanks in advance\r\n\r\nHope that helps you out! Don't forget to `make style`, rebase from master, and run all the tests before pushing again! You will also need to add a `README.md` as described in the guide:\r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card",
"Thanks a lot for the answer, that does help a lot !\r\nI opened a PR for a License in the original repo so I was waiting for that for the model card. If there is no news on Monday, I'll add it without License. ",
"Looks good! It looks like it might need a rebase to pass the tests. Once you do that, should be good to go!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3688/comments | https://api.github.com/repos/huggingface/datasets/issues/3688/events | https://github.com/huggingface/datasets/issues/3688 | 1,127,218,321 | I_kwDODunzps5DL_yR | 3,688 | Pyarrow version error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-02-08T12:53:59Z | 2022-02-09T06:35:33Z | 2022-02-09T06:35:32Z | null | ## Describe the bug
I installed `datasets` (versions 1.17.0, 1.18.0, 1.18.3) but right now I'm not able to import it because of `pyarrow`. When I try to import it, I get the following error:
`To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`.
I tried every version of `pyarrow` except `4.0.0` but I still get the same error.
## Steps to reproduce the bug
```python
import datasets
```
## Expected results
A clear and concise description of the expected results.
## Actual results
AttributeError Traceback (most recent call last)
<ipython-input-19-652e886d387f> in <module>
----> 1 import datasets
~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module>
26
27
---> 28 if _version.parse(pyarrow.__version__).major < 3:
29 raise ImportWarning(
30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n"
AttributeError: 'Version' object has no attribute 'major'
## Environment info
Traceback (most recent call last):
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module>
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module>
if _version.parse(pyarrow.__version__).major < 3:
AttributeError: 'Version' object has no attribute 'major'
- `datasets` version:
- Platform: Linux (Ubuntu) and Windows, conda on both
- Python version: 3.7
- PyArrow version: 7.0.0
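For what it's worth, a more defensive check that avoids the `.major` attribute (which appears to be missing from the `Version` objects of older `packaging` releases) could look like this; a hedged sketch, not necessarily the fix adopted in `datasets`:
```python
# Hedged sketch: compare full Version objects instead of accessing ".major",
# which is not available on Version objects from some older packaging releases.
from packaging import version
import pyarrow

if version.parse(pyarrow.__version__) < version.parse("3.0.0"):
    raise ImportWarning(
        "To use `datasets`, the module `pyarrow>=3.0.0` is required, "
        "and the current version of `pyarrow` doesn't match this condition."
    )
```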
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3688/timeline | null | completed | null | null | false | [
"Hi @Zaker237, thanks for reporting.\r\n\r\nThis is weird: the error you get is only thrown if the installed pyarrow version is less than 3.0.0.\r\n\r\nCould you please check that you install pyarrow in the same Python virtual environment where you installed datasets?\r\n\r\nFrom the Python command line (or terminal) where you get the error, please type:\r\n```\r\nimport pyarrow\r\nprint(pyarrow.__version__)\r\nimport datasets\r\nprint(datasets.__version__)\r\n``` ",
"hi @albertvillanova i try yesterday to create a new python environement with python 7 and try it on the environement and it worked. so i think that the error was not the package but may be jupyter notebook on conda. still yet i'm not yet sure but it worked in an environment created with venv",
"OK, thanks @Zaker237 for your feedback.\r\n\r\nI close this issue then. Please, feel free to reopen it if the problem arises again."
] |
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2021-03-29T06:34:02Z | 2021-03-31T12:48:01Z | 2021-03-31T12:48:01Z | null | Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spotted an error: the order of the dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | completed | null | null | false | [
"Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/1194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1194/comments | https://api.github.com/repos/huggingface/datasets/issues/1194/events | https://github.com/huggingface/datasets/pull/1194 | 757,880,647 | MDExOlB1bGxSZXF1ZXN0NTMzMTY0MDcz | 1,194 | Add msr_text_compression | [] | closed | false | null | 1 | 2020-12-06T09:06:11Z | 2020-12-09T10:53:45Z | 2020-12-09T10:53:45Z | null | Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1194",
"merged_at": "2020-12-09T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1194"
} | true | [
"the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine"
] |