url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6031/comments | https://api.github.com/repos/huggingface/datasets/issues/6031/events | https://github.com/huggingface/datasets/issues/6031 | 1,804,183,858 | I_kwDODunzps5riaky | 6,031 | Argument type for map function changes when using `input_columns` for `IterableDataset` | [] | closed | false | null | 1 | 2023-07-14T05:11:14Z | 2023-07-14T14:44:15Z | 2023-07-14T14:44:15Z | null | ### Describe the bug
I wrote a `tokenize(examples)` function to pass to the `map` function of an `IterableDataset`.
It processes a dictionary-type `examples` parameter.
It is used as `train_dataset = train_dataset.map(tokenize, batched=True)`.
No error is raised.
Then I found some unnecessary keys and values in `examples`, so I added the `input_columns` argument to the `map` call to select the keys and values I need.
This gives me an error saying
```
TypeError: tokenize() takes 1 positional argument but 3 were given.
```
The code below matters.
https://github.com/huggingface/datasets/blob/406b2212263c0d33f267e35b917f410ff6b3bc00/src/datasets/iterable_dataset.py#L687
For example, `inputs = {"a":1, "b":2, "c":3}`.
If `self.input_columns` is `None`,
`inputs` is a dictionary type variable and `function_args` becomes a `list` of a single `dict` variable.
`function_args` becomes `[{"a":1, "b":2, "c":3}]`
Otherwise, lets say `self.input_columns = ["a", "c"]`
`[inputs[col] for col in self.input_columns]` results in `[1, 3]`.
I think it should be `[{"a":1, "c":3}]`.
I want to ask whether this resulting format is intended.
Maybe I can modify `tokenize()` to take 2 positional parameters in this case instead of 1 dictionary, but that is confusing to me.
Or it should be fixed as `[{col: inputs[col] for col in self.input_columns}]`; a small sketch of both behaviours is below.
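A minimal sketch of the dispatch described above (illustrative only, not the actual library code):

```python
inputs = {"a": 1, "b": 2, "c": 3}
input_columns = ["a", "c"]

# Current behaviour described in the issue:
if input_columns is None:
    function_args = [inputs]  # -> [{"a": 1, "b": 2, "c": 3}]
else:
    function_args = [inputs[col] for col in input_columns]  # -> [1, 3]

# The alternative proposed above keeps the dict shape:
function_args = [{col: inputs[col] for col in input_columns}]  # -> [{"a": 1, "c": 3}]
```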
### Steps to reproduce the bug
Run `map` function of `IterableDataset` with `input_columns` argument.
### Expected behavior
It would be better if `function_args` had the same format in both cases.
I think it should be `[{"a":1, "c":3}]`.
### Environment info
dataset version: 2.12
python: 3.8 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6031/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6031/timeline | null | completed | null | null | false | [
"Yes, this is intended."
] |
https://api.github.com/repos/huggingface/datasets/issues/914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/914/comments | https://api.github.com/repos/huggingface/datasets/issues/914/events | https://github.com/huggingface/datasets/pull/914 | 752,956,106 | MDExOlB1bGxSZXF1ZXN0NTI5MTM2Njk3 | 914 | Add list_github_datasets api for retrieving dataset name list in github repo | [] | closed | false | null | 4 | 2020-11-29T16:42:15Z | 2020-12-02T07:21:16Z | 2020-12-02T07:21:16Z | null | Thank you for your great effort on unifying data processing for NLP!
This PR is trying to add a new api `list_github_datasets` in the `inspect` module. The reason for it is that the current `list_datasets` api needs to access https://huggingface.co/api/datasets to get a large json. However, this connection can be really slow... (I was visiting from China) and from my own experience, most of the time `requests.get` fails to download the whole json after a long wait and then triggers a fault in `r.json()`.
I also noticed that the current implementation will first try to download from github, which lets me smoothly run `load_dataset('squad')` in the example.
Therefore, I think it would be better if we had an api to get the list of datasets that are available on github; it would also improve newcomers' experience (it is a little frustrating if one cannot successfully run the first function in the README example) until we have a faster source for huggingface.co.
As for the implementation, I've added a `dataset_infos.json` file under the `datasets` folder, and it has the following structure:
```json
{
"id": "aeslc",
"folder": "datasets/aeslc",
"dataset_infos": "datasets/aeslc/dataset_infos.json"
},
...
{
"id": "json",
"folder": "datasets/json"
},
...
```
The script I used to get this file is:
```python
import json
import os

DATASETS_BASE_DIR = "/root/datasets"
DATASET_INFOS_JSON = "dataset_infos.json"

datasets = []
for item in os.listdir(os.path.join(DATASETS_BASE_DIR, "datasets")):
    if os.path.isdir(os.path.join(DATASETS_BASE_DIR, "datasets", item)):
        datasets.append(item)
datasets.sort()

total_ds_info = []
for ds in datasets:
    ds_dir = os.path.join("datasets", ds)
    ds_info_dir = os.path.join(ds_dir, DATASET_INFOS_JSON)
    if os.path.isfile(os.path.join(DATASETS_BASE_DIR, ds_info_dir)):
        total_ds_info.append({"id": ds, "folder": ds_dir, "dataset_infos": ds_info_dir})
    else:
        total_ds_info.append({"id": ds, "folder": ds_dir})

with open(DATASET_INFOS_JSON, "w") as f:
    json.dump(total_ds_info, f)
```
The new `dataset_infos.json` is saved as formatted JSON so that it will be easy to add new datasets.
When calling `list_github_datasets`, the user will get the list of dataset names in this github repo, and if `with_details` is set to `True`, they can also get the URL of each dataset's info file.
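A sketch of the proposed usage, based only on the description in this PR (the exact signature is an assumption, and this API was never merged):

```python
from nlp import list_github_datasets  # hypothetical import, as proposed in this PR

# Plain list of dataset names available in the GitHub repo
names = list_github_datasets()

# With details, each entry also carries the dataset_infos.json location
detailed = list_github_datasets(with_details=True)
```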
Thank you for your time on reviewing this pr :). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/914/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/914.diff",
"html_url": "https://github.com/huggingface/datasets/pull/914",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/914.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/914"
} | true | [
"We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?",
"> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip`",
"`GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?",
"> `GET /api/datasets` should now be much faster. @zhuzilin can you check if `list_datasets` is now faster for you?\r\n\r\nYes, much faster! Thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/926/comments | https://api.github.com/repos/huggingface/datasets/issues/926/events | https://github.com/huggingface/datasets/pull/926 | 753,676,069 | MDExOlB1bGxSZXF1ZXN0NTI5NzA4MTcy | 926 | add inquisitive | [] | closed | false | null | 3 | 2020-11-30T17:45:22Z | 2020-12-02T13:45:22Z | 2020-12-02T13:40:13Z | null | Adding inquisitive qg dataset
More info: https://github.com/wjko2/INQUISITIVE | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/926/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/926.diff",
"html_url": "https://github.com/huggingface/datasets/pull/926",
"merged_at": "2020-12-02T13:40:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/926.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/926"
} | true | [
"`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?",
"> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should definitely find a way to make it work with only a few articles.\r\n\r\nIf it doesn't work right now for dummy data, I guess it's because it tries to load every single article file ?\r\n\r\nIf so, then maybe you can use `os.listdir` method to first check all the data files available in the path where the `articles.tgz` file is extracted. Then you can simply iter through the data files and depending on their ID, include them in the train or test set. With this method you should be able to have only a few articles files per split in the dummy data. Does that make sense ?",
"fixed! so the issue was, `articles_ids` were prepared based on the number of files in articles dir, so for dummy data questions it was not able to load some articles due to incorrect ids and the test was failing"
] |
https://api.github.com/repos/huggingface/datasets/issues/4462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4462/comments | https://api.github.com/repos/huggingface/datasets/issues/4462/events | https://github.com/huggingface/datasets/issues/4462 | 1,265,079,347 | I_kwDODunzps5LZ5Qz | 4,462 | BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2022-06-08T17:31:24Z | 2022-07-05T07:39:55Z | null | null | As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`.
This is because it checks the expected number of examples for the config with the same name, without taking the `max_examples` parameter into account. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed, the config id corresponds to the config name plus an optional suffix that depends on the config parameters.
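A hypothetical sketch of the distinction (all names below are made up for illustration):

```python
# Expected split sizes are currently looked up by config *name* only.
config_name = "some_task"
config_id = "some_task-max_examples=100"  # id = name + optional parameter suffix

expected = expected_splits[config_name]  # current: ignores max_examples
expected = expected_splits.get(config_id)  # proposed: keyed by config id
```
| {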
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4462/timeline | null | reopened | null | null | false | [
"Why not adding `max_examples` as part of the config name?",
"Yup it can also work, and maybe it's simpler this way. Opening a PR to fix bigbench instead of https://github.com/huggingface/datasets/pull/4463",
"Hi @lhoestq,\r\n\r\nThank you for taking a look at this issue, and proposing a solution. \r\nUnfortunately, after trying the fix in #4465 I still see the same issue.\r\n\r\nI think there is some subtlety where the config name gets overwritten somewhere when `BUILDER_CONFIGS`[(link)](https://github.com/huggingface/datasets/blob/master/datasets/bigbench/bigbench.py#L126) is defined. \r\n\r\nIf I print out the `self.config.name` in the current version (with the fix in #4465), I see just the task name, but if I comment out `BUILDER_CONFIGS`, the `num_shots` and `max_examples` gets appended as was meant by #4465.\r\n\r\nI haven't managed to track down where this happens, but I thought you might know? \r\n\r\n(Another comment on your fix: the `name` variable is used to fetch the task from the bigbench API, so modifying it causes an error if it's actually called. This can easily be fixed by having `config_name` variable in addition to the `task_name`)\r\n\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3752/comments | https://api.github.com/repos/huggingface/datasets/issues/3752/events | https://github.com/huggingface/datasets/pull/3752 | 1,142,627,889 | PR_kwDODunzps4zD1D9 | 3,752 | Update metadata JSON for cats_vs_dogs dataset | [] | closed | false | null | 0 | 2022-02-18T08:32:53Z | 2022-02-18T14:56:12Z | 2022-02-18T14:56:11Z | null | Note that the number of examples in the train split was already fixed in the dataset card.
Fix #3750. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3752/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3752/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3752.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3752",
"merged_at": "2022-02-18T14:56:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3752.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3752"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4440/comments | https://api.github.com/repos/huggingface/datasets/issues/4440/events | https://github.com/huggingface/datasets/pull/4440 | 1,258,494,469 | PR_kwDODunzps44_io_ | 4,440 | Update docs around audio and vision | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 2 | 2022-06-02T17:42:03Z | 2022-06-23T16:33:19Z | 2022-06-23T16:23:02Z | null | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.
Other changes include:
- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.
- Updated the native TF code for creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant, and removing it fixed the error (please double-check me here!); a sketch of the resulting call appears after this list.
- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.
- Reverted to the code tabs for content that doesn't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.
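A rough sketch of the kind of call the updated quickstart describes (the column names and collator below are placeholders, not taken from this PR):

```python
from transformers import DefaultDataCollator

collator = DefaultDataCollator(return_tensors="tf")
tf_dataset = dataset.to_tf_dataset(
    columns=["input_ids", "token_type_ids", "attention_mask"],  # placeholder columns
    label_cols=["labels"],
    batch_size=8,
    shuffle=True,
    collate_fn=collator,
)
```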
Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4440/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4440.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4440",
"merged_at": "2022-06-23T16:23:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4440.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4440"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?\r\n\r\nWe plan to address this with end-to-end examples (for each modality) more focused on preprocessing than the ones in the Transformers docs."
] |
https://api.github.com/repos/huggingface/datasets/issues/4941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4941/comments | https://api.github.com/repos/huggingface/datasets/issues/4941/events | https://github.com/huggingface/datasets/pull/4941 | 1,363,622,861 | PR_kwDODunzps4-dQ9F | 4,941 | Add Papers with Code ID to scifact dataset | [] | closed | false | null | 1 | 2022-09-06T17:46:37Z | 2022-09-06T18:28:17Z | 2022-09-06T18:26:01Z | null | This PR:
- adds Papers with Code ID
- forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4941/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"merged_at": "2022-09-06T18:26:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1559/comments | https://api.github.com/repos/huggingface/datasets/issues/1559/events | https://github.com/huggingface/datasets/pull/1559 | 765,714,183 | MDExOlB1bGxSZXF1ZXN0NTM5MDQ5MTky | 1,559 | adding dataset card information to CONTRIBUTING.md | [] | closed | false | null | 0 | 2020-12-14T00:08:43Z | 2020-12-14T17:55:03Z | 2020-12-14T17:55:03Z | null | Added a documentation line and link to the full sprint guide in the "How to add a dataset" section, and a section on how to contribute to the dataset card of an existing dataset.
And a thank you note at the end :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1559/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1559",
"merged_at": "2020-12-14T17:55:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1559"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4311/comments | https://api.github.com/repos/huggingface/datasets/issues/4311/events | https://github.com/huggingface/datasets/pull/4311 | 1,231,369,438 | PR_kwDODunzps43ln8- | 4,311 | [Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly | [] | closed | false | null | 2 | 2022-05-10T15:52:15Z | 2022-05-10T17:19:42Z | 2022-05-10T17:11:47Z | null | I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`.
While doing so I also improved a few aspects:
- we don't need to infer labels from file names when there is metadata: labels can simply be included in the metadata if necessary
- raise informative error messages when metadata and images aren't linked correctly:
- when an image is missing a metadata file
- when a metadata file is missing an image
I added some tests for these changes as well. A hypothetical sketch of the metadata layout is shown below.
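A minimal sketch of the `metadata.jsonl` layout for the image captioning case (the file names and captions are invented for illustration):

```python
import json

# Each line links one image file to its caption via the "file_name" key.
rows = [
    {"file_name": "0001.png", "text": "a red logo on a white background"},
    {"file_name": "0002.png", "text": "a blue wordmark"},
]
with open("data/train/metadata.jsonl", "w", encoding="utf-8") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")
```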
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4311/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4311",
"merged_at": "2022-05-10T17:11:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4311"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it"
] |
https://api.github.com/repos/huggingface/datasets/issues/1855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1855/comments | https://api.github.com/repos/huggingface/datasets/issues/1855/events | https://github.com/huggingface/datasets/pull/1855 | 805,256,579 | MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3 | 1,855 | Minor fix in the docs | [] | closed | false | null | 0 | 2021-02-10T07:27:43Z | 2021-02-10T12:33:09Z | 2021-02-10T12:33:09Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1855/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1855.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1855",
"merged_at": "2021-02-10T12:33:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1855.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1855"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/18 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/18/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/18/comments | https://api.github.com/repos/huggingface/datasets/issues/18/events | https://github.com/huggingface/datasets/pull/18 | 606,109,196 | MDExOlB1bGxSZXF1ZXN0NDA4Mzg0MTc3 | 18 | Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo | [] | closed | false | null | 1 | 2020-04-24T07:39:48Z | 2020-04-29T15:27:28Z | 2020-04-28T16:06:28Z | null | This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).
# Style & quality:
You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of isort, and flake8.
You can then clean the style and check the quality before merging your PR with:
```bash
make style
make quality
```
# Allow dependencies in dataset processing scripts
We can now allow (some level of) imports in dataset processing scripts (in addition to PyPI imports).
Namely, you can do the two following things:
Import from a relative path to a file in the same folder as the dataset processing script:
```python
import .c4_utils
```
Or import from a relative path to a file in a folder/archive/github repo to which you provide a URL after the import statement with `# From: [URL]`:
```python
import .clicr.dataset_code.build_json_dataset # From: https://github.com/clips/clicr
```
In both these cases, after downloading the main dataset processing script, we will identify the location of these dependencies, download them and copy them in the dataset processing script folder.
Note that only direct imports in the dataset processing script will be handled.
We don't recursively explore the additional imports to download further files.
Also, when we download from an additional directory (in the second case above), we recursively add `__init__.py` to all the sub-folders so you can import from them.
This part is still largely untested for now. If you've seen datasets which required external utilities, tell me and I can test it.
# Update the cache to have a better local structure
The local structure in the `src/datasets` folder is now: `src/datasets/DATASET_NAME/DATASET_HASH/*`
The hash is computed from the full code of the dataset processing script as well as all the local and downloaded dependencies as mentioned above. This way if you change some code in a utility related to your dataset, a new hash should be computed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/18/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/18/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/18.diff",
"html_url": "https://github.com/huggingface/datasets/pull/18",
"merged_at": "2020-04-28T16:06:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/18.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/18"
} | true | [
"LGTM"
] |
https://api.github.com/repos/huggingface/datasets/issues/509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/509/comments | https://api.github.com/repos/huggingface/datasets/issues/509/events | https://github.com/huggingface/datasets/issues/509 | 679,711,585 | MDU6SXNzdWU2Nzk3MTE1ODU= | 509 | Converting TensorFlow dataset example | [] | closed | false | null | 2 | 2020-08-16T08:05:20Z | 2021-08-03T06:01:18Z | 2021-08-03T06:01:17Z | null | Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/509/timeline | null | completed | null | null | false | [
"Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it work in reverse, feel free to open a PR to share it with the community :)",
"In our docs: [Using a Dataset with PyTorch/Tensorflow](https://huggingface.co/docs/datasets/torch_tensorflow.html)."
] |
https://api.github.com/repos/huggingface/datasets/issues/2584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2584/comments | https://api.github.com/repos/huggingface/datasets/issues/2584/events | https://github.com/huggingface/datasets/pull/2584 | 936,049,736 | MDExOlB1bGxSZXF1ZXN0NjgyODY2Njc1 | 2,584 | wi_locness: reference latest leaderboard on codalab | [] | closed | false | null | 0 | 2021-07-02T20:26:22Z | 2021-07-05T09:06:14Z | 2021-07-05T09:06:14Z | null | The dataset's author asked me to put this codalab link into the dataset's README. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2584/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2584",
"merged_at": "2021-07-05T09:06:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2584"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4221/comments | https://api.github.com/repos/huggingface/datasets/issues/4221/events | https://github.com/huggingface/datasets/issues/4221 | 1,215,911,182 | I_kwDODunzps5IeVUO | 4,221 | Dictionary Feature | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 2 | 2022-04-26T12:50:18Z | 2022-04-29T14:52:19Z | 2022-04-28T17:04:58Z | null | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4221/timeline | null | completed | null | null | false | [
"Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n],\r\n```\r\n\r\nFeel free to re-open this issue if that does not work for your use case.",
"> Hi @jordiae,\r\n> \r\n> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n> \r\n> ```python\r\n> \"list_of_dict_feature\": [\r\n> {\r\n> \"key1_in_dict\": datasets.Value(\"string\"),\r\n> \"key2_in_dict\": datasets.Value(\"int32\"),\r\n> ...\r\n> }\r\n> ],\r\n> ```\r\n> \r\n> Feel free to re-open this issue if that does not work for your use case.\r\n\r\nThank you"
] |
https://api.github.com/repos/huggingface/datasets/issues/3034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3034/comments | https://api.github.com/repos/huggingface/datasets/issues/3034/events | https://github.com/huggingface/datasets/issues/3034 | 1,016,759,202 | I_kwDODunzps48moOi | 3,034 | Errors loading dataset using fs = a gcsfs.GCSFileSystem | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2021-10-05T20:07:08Z | 2021-10-05T20:26:39Z | null | null | ## Describe the bug
Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here...
Basically what seems to be happening is that since datasets saves datasets as folders, and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you use gcsfs directly to download the file, but as it stands I can't get `load_from_disk` to work.
## Steps to reproduce the bug
```python
from datasets import load_dataset
# load some dataset
dataset = load_dataset("squad", split="train")
# save it to gcs
import gcsfs
fs = gcsfs.GCSFileSystem(project="my-gs-project")
dataset.save_to_disk("gs://my-bucket/squad", fs=fs)
# try to load it from gcs
from datasets import load_from_disk
dataset2 = load_from_disk("my-bucket/squad", fs=fs)
```
## Expected results
`dataset2` would be a copy of `dataset` but loaded from my bucket.
## Actual results
Long traceback but essentially it's a 404 error from gcsfs saying the object `my-bucket/squad` doesn't exist when this is called:
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L977
This is because there is no actual object called `my-bucket/squad`, there are objects called `my-bucket/squad/dataset.arrow`, etc.
Note that *this* works fine, since it's explicitly saying "download all the objects with this prefix":
```python
fs.download(src_dataset_path + "/*", dataset_path.as_posix(), recursive=True)
```
For example, I can do a workaround this way:
```python
import tempfile
with tempfile.TemporaryDirectory() as temppath:
fs.download("gs://my-bucket/squad/*", temppath)
dataset2 = load_from_disk(temppath)
```
It's unclear to me whether it's `gcsfs`'s responsibility to say "hey, that's a folder, not a file; I should try to get the objects inside it, not the object itself", or whether that's `datasets`'s responsibility. I'm leaning towards the latter, since this function/method never loads a dataset from a single file, only from a dataset folder.
Another minor thing that should maybe be rolled into this bug...
https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L968
These fail if you pass in a `gs://` path, e.g.
```python
dataset2 = load_from_disk("gs://my-bucket/squad", fs=fs)
```
Because at this point, `dataset_info_path` is `gs:/my-bucket/squad/dataset_info.json`, gcsfs throws a:
```
Invalid bucket name: 'gs:'
```
error
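A small illustration of the likely cause (my assumption, consistent with the `gs:/my-bucket/...` value quoted above: `Path` normalization collapses the internal double slash, so the scheme ends up looking like the first path component):

```python
from pathlib import Path

# The "gs://" double slash is collapsed, so "gs:" is parsed as the bucket name.
print(Path("gs://my-bucket/squad").as_posix())  # gs:/my-bucket/squad
```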
## Environment info
- `datasets` version: 1.12.1
- Platform: macOS Big Sur 11.6
- Python version: 3.7.12
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3034/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2323 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2323/comments | https://api.github.com/repos/huggingface/datasets/issues/2323/events | https://github.com/huggingface/datasets/issues/2323 | 876,438,507 | MDU6SXNzdWU4NzY0Mzg1MDc= | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-05-05T13:14:48Z | 2021-05-07T10:32:30Z | 2021-05-07T10:32:30Z | null | ## Describe the bug
When you index with the key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times, namely "Would such an act of refusal be useful?". Similarly, when you index with ['test'] and then ['text'], the list is the single sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times.
I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836) and removing the entire huggingface directory from ~/.cache, but I still get the same issue.
## Steps to reproduce the bug
```python
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(timit['train']['text'])
print(timit['test']['text'])
```
## Expected Result
Rows of diverse text, like how it is shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb)
<img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png">
## Actual results
Rows of repeated text.
<img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png">
## Versions
- Datasets: 1.3.0
- Python: 3.9.1
- Platform: macOS-11.2.1-x86_64-i386-64bit
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2323/timeline | null | completed | null | null | false | [
"Upgrading datasets to version 1.6 fixes the issue",
"This bug was fixed in #1995. Upgrading the `datasets` should work! ",
"Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists."
] |
https://api.github.com/repos/huggingface/datasets/issues/1228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1228/comments | https://api.github.com/repos/huggingface/datasets/issues/1228/events | https://github.com/huggingface/datasets/pull/1228 | 758,049,068 | MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODI2 | 1,228 | add opus_100 dataset | [] | closed | false | null | 1 | 2020-12-06T23:17:24Z | 2020-12-09T14:54:00Z | 2020-12-09T14:54:00Z | null | This PR will add [opus100 dataset](http://opus.nlpl.eu/opus-100.php). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1228/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1228",
"merged_at": "2020-12-09T14:53:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1228"
} | true | [
"done."
] |
https://api.github.com/repos/huggingface/datasets/issues/3692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3692/comments | https://api.github.com/repos/huggingface/datasets/issues/3692/events | https://github.com/huggingface/datasets/pull/3692 | 1,128,320,004 | PR_kwDODunzps4yShiu | 3,692 | Update data URL in pubmed dataset | [] | closed | false | null | 2 | 2022-02-09T10:06:21Z | 2022-02-14T14:15:42Z | 2022-02-14T14:15:41Z | null | Fix #3655. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3692/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3692",
"merged_at": "2022-02-14T14:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3692"
} | true | [
"- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs/60437\r\n```\r\nNo such file or directory: '.../dummy_data/pubmed22n0002.xml.gz'\r\n```\r\n- it needs dummy data for all the 1114 files: \r\n `_URLs = [f\"ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed22n{i:04d}.xml.gz\" for i in range(1, 1115)]`\r\n- this confirms me that it never passed the test: these dummy data files were not present before my PR\r\n- therefore, is it really useful the data test if we just ignore it when it does not pass?\r\n\r\nIn relation with JSON metadata, I was generating the file for `pubmed` (see above) in a GCP instance: after running during ~12h without finishing, I decided to stop the process.",
"Hi ! Yes I remembered we hardcoded an exception for this one:\r\nhttps://github.com/huggingface/datasets/blob/36db39c75179a0a491c69a4491f7ae7e4615e66f/src/datasets/utils/mock_download_manager.py#L174-L176\r\n\r\nThe exception was used to only require one dummy data file, feel free to update it if you want"
] |
https://api.github.com/repos/huggingface/datasets/issues/4018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4018/comments | https://api.github.com/repos/huggingface/datasets/issues/4018/events | https://github.com/huggingface/datasets/pull/4018 | 1,180,622,816 | PR_kwDODunzps41Aj7g | 4,018 | Replace yelp_review_full data url | [] | closed | false | null | 1 | 2022-03-25T10:37:18Z | 2022-03-25T15:01:02Z | 2022-03-25T14:56:10Z | null | I replaced the Google Drive URL of the Yelp review dataset by the FastAI one, since we've had some issues with Google Drive.
Close https://github.com/huggingface/datasets/issues/4005 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4018/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4018",
"merged_at": "2022-03-25T14:56:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4018"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4932/comments | https://api.github.com/repos/huggingface/datasets/issues/4932/events | https://github.com/huggingface/datasets/issues/4932 | 1,362,522,423 | I_kwDODunzps5RNnE3 | 4,932 | Dataset Viewer issue for bigscience-biomedical/biosses | [] | closed | false | null | 4 | 2022-09-05T22:40:32Z | 2022-09-06T14:24:56Z | 2022-09-06T14:24:56Z | null | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4932/timeline | null | completed | null | null | false | [
"Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical/biosses')\r\nDownloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.00k/8.00k [00:00<00:00, 7.47MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```",
"Opened a PR here to (hopefully) fix the dataset script: https://huggingface.co/datasets/bigscience-biomedical/biosses/discussions/1/files",
"thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b/c it seems to be working here \r\n\r\nhttps://huggingface.co/datasets/bigscience-biomedical/scitail/blob/main/scitail.py#L31\r\n\r\nI'll try this PR a see what happens. ",
"closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) "
] |
https://api.github.com/repos/huggingface/datasets/issues/5050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5050/comments | https://api.github.com/repos/huggingface/datasets/issues/5050/events | https://github.com/huggingface/datasets/issues/5050 | 1,392,381,882 | I_kwDODunzps5S_g-6 | 5,050 | Restore saved format state in `load_from_disk` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 2 | 2022-09-30T12:40:07Z | 2022-10-11T16:49:24Z | 2022-10-11T16:49:24Z | null | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
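A minimal reproduction sketch of the reported behaviour (the method names are the real `datasets` API; the dataset name is a placeholder):

```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("some_dataset", split="train")  # placeholder dataset
ds = ds.with_format("torch")
ds.save_to_disk("saved_ds")

reloaded = load_from_disk("saved_ds")
print(reloaded.format)  # format is back to the default instead of "torch"
```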
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5050/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5050/timeline | null | completed | null | null | false | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] |
https://api.github.com/repos/huggingface/datasets/issues/3165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3165/comments | https://api.github.com/repos/huggingface/datasets/issues/3165/events | https://github.com/huggingface/datasets/issues/3165 | 1,036,448,998 | I_kwDODunzps49xvTm | 3,165 | Deprecate prepare_module | [] | closed | false | null | 0 | 2021-10-26T15:27:15Z | 2021-11-05T09:27:36Z | 2021-11-05T09:27:36Z | null | In version 1.13, `prepare_module` was deprecated.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3165/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5672/comments | https://api.github.com/repos/huggingface/datasets/issues/5672/events | https://github.com/huggingface/datasets/issues/5672 | 1,641,005,322 | I_kwDODunzps5hz8EK | 5,672 | Pushing dataset to hub crash | [] | closed | false | null | 3 | 2023-03-26T17:42:13Z | 2023-03-30T08:11:05Z | 2023-03-30T08:11:05Z | null | ### Describe the bug
Uploading a dataset with `push_to_hub()` fails without a helpful error description.
### Steps to reproduce the bug
Hey there,
I've built a image dataset of 100k images + text pair as described here https://huggingface.co/docs/datasets/image_dataset#imagefolder
Now I'm trying to push it to the hub but I'm running into issues. First, I tried doing it via git directly, I added all the files in git lfs and pushed but I got hit with an error saying huggingface only accept up to 10k files in a folder.
So I'm now trying with the `push_to_hub()` func as follow:
```python
from datasets import load_dataset
import os
dataset = load_dataset("imagefolder", data_dir="./data", split="train")
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
```
But again, this produces an error:
```
Resolving data files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████| 100212/100212 [00:00<00:00, 439108.61it/s]
Downloading and preparing dataset imagefolder/default to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f...
Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 100211/100211 [00:00<00:00, 149323.73it/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15947.92it/s]
Extracting data files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2245.34it/s]
Dataset imagefolder downloaded and prepared to /home/contact_theochampion/.cache/huggingface/datasets/imagefolder/default-20567ffc703aa314/0.0.0/37fbb85cc714a338bea574ac6c7d0b5be5aff46c1862c1989b20e0771199e93f. Subsequent calls will reuse this data.
Resuming upload of the dataset shards.
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 14/14 [00:31<00:00, 2.24s/it]
Downloading metadata: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:00<00:00, 225kB/s]
Traceback (most recent call last):
File "/home/contact_theochampion/organization-logos/push_to_hub.py", line 5, in <module>
dataset.push_to_hub("tzvc/organization-logos", token=os.environ.get('HF_TOKEN'))
File "/home/contact_theochampion/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5245, in push_to_hub
repo_info = dataset_infos[next(iter(dataset_infos))]
StopIteration
```
What could be happening here?
### Expected behavior
The dataset is pushed to the hub
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.10.0-21-cloud-amd64-x86_64-with-glibc2.31
- Python version: 3.9.2
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5672/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5672/timeline | null | completed | null | null | false | [
"Hi ! It's been fixed by https://github.com/huggingface/datasets/pull/5598. We're doing a new release tomorrow with the fix and you'll be able to push your 100k images ;)\r\n\r\nBasically `push_to_hub` used to fail if the remote repository already exists and has a README.md without dataset_info in the YAML tags.\r\n\r\nIn the meantime you can install datasets from source",
"Hi @lhoestq ,\r\n\r\nWhat version of datasets library fix this case? I am using the last `v2.10.1` and I get the same error.",
"We just released 2.11 which includes a fix :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/742/comments | https://api.github.com/repos/huggingface/datasets/issues/742/events | https://github.com/huggingface/datasets/pull/742 | 724,509,974 | MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3 | 742 | Add OCNLI, a new CLUE dataset | [] | closed | false | null | 1 | 2020-10-19T11:06:33Z | 2020-10-22T16:19:49Z | 2020-10-22T16:19:48Z | null | OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for
Chinese Natural Language Inference, collected closely following the procedures of MNLI,
but with enhanced strategies aiming for more challenging inference pairs. We want to
emphasize we did not use human/machine translation in creating the dataset, and thus
our Chinese texts are original and not translated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/742/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/742.diff",
"html_url": "https://github.com/huggingface/datasets/pull/742",
"merged_at": "2020-10-22T16:19:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/742.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/742"
} | true | [
"Thanks :) merging it"
] |
https://api.github.com/repos/huggingface/datasets/issues/2119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2119/comments | https://api.github.com/repos/huggingface/datasets/issues/2119/events | https://github.com/huggingface/datasets/pull/2119 | 841,567,199 | MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy | 2,119 | copy.deepcopy os.environ instead of copy | [] | closed | false | null | 0 | 2021-03-26T03:58:38Z | 2021-03-26T15:13:52Z | 2021-03-26T15:13:52Z | null | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using `environ.copy()` returns a plain `dict`.
- using `deepcopy(environ)` returns an `os._Environ` object
- Changing the datatype of the `_Environ` object can break code if subsequent libraries rely on behaviour exclusive to it, for example assignments such as `os.environ["KEY"] = "value"` propagating to the real process environment via `os.putenv`.
Testing:
Tested the change on my terminal:
```
>>> import os
>>> from copy import deepcopy
>>> x = deepcopy(os.environ)
>>> y = os.environ
>>> x is y
False
>>> isinstance(x, type(os.environ))
True
>>> z = os.environ.copy()
>>> isinstance(z, type(os.environ))
False
>>> isinstance(z, dict)
True
``` | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2119/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"merged_at": "2021-03-26T15:13:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2119"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4656/comments | https://api.github.com/repos/huggingface/datasets/issues/4656/events | https://github.com/huggingface/datasets/issues/4656 | 1,296,740,266 | I_kwDODunzps5NSq-q | 4,656 | Add Amazon-QA Dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-07-07T03:15:11Z | 2022-07-14T02:20:12Z | 2022-07-14T02:20:12Z | null | ## Adding a Dataset
- **Name:** *Amazon-QA*
- **Description:** *The dataset is in .jsonl format, where each line in the file is a JSON string that corresponds to a question, existing answers to the question, and the extracted review snippets (relevant to the question).* (A loading sketch follows this list.)
- **Paper:** *https://github.com/amazonqa/amazonqa/tree/master/paper*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
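A sketch (my addition, untested against this exact file) of loading the dump with the generic JSON builder, which also handles gzipped JSON Lines files:
```python
from datasets import load_dataset

amazon_qa = load_dataset(
    "json",
    data_files="https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz",
    split="train",
)
```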
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4656/timeline | null | completed | null | null | false | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/Amazon-QA)."
] |
https://api.github.com/repos/huggingface/datasets/issues/5147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5147/comments | https://api.github.com/repos/huggingface/datasets/issues/5147/events | https://github.com/huggingface/datasets/issues/5147 | 1,419,522,275 | I_kwDODunzps5UnDDj | 5,147 | Allow ignoring kwargs inside fn_kwargs during dataset.map's fingerprinting | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 4 | 2022-10-22T21:46:38Z | 2022-11-01T22:19:07Z | null | null | ### Feature request
`dataset.map` accepts an `fn_kwargs` argument that is passed to `fn`. Currently, the whole `fn_kwargs` is used by `fingerprint_transform` to calculate the new fingerprint.
I'd like to be able to inform `fingerprint_transform` which `fn_kwargs` should/shouldn't be taken into account during hashing.
Of course, users should be aware of how to use this new feature properly, just like the internal usages of `fingerprint_transform` [do](https://github.com/huggingface/datasets/blob/2699593b33ee63d17aad2a2bfddedd38a8df57b8/src/datasets/arrow_dataset.py#L2700).
### Motivation
This is originally motivated by https://github.com/huggingface/transformers/pull/18351#issuecomment-1263588680.
Nonetheless, consider a more general processing function that accepts a kwarg that does not influence its output:
```python
def fn(example, verbose=False):
    ...
```
Then `dataset.map(fn, fn_kwargs={"verbose": True})` would not benefit from dataset caching.
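As a sketch of the idea (the helper below is hypothetical; it assumes `datasets.fingerprint.Hasher` is importable, as in recent versions), one could hash only the influential kwargs:
```python
from datasets.fingerprint import Hasher  # assumption: available as a public utility

def hash_fn_kwargs(fn_kwargs, ignore=()):
    # Keep only the kwargs that actually influence fn's output.
    relevant = {k: v for k, v in sorted(fn_kwargs.items()) if k not in ignore}
    return Hasher.hash(relevant)

# Same fingerprint contribution regardless of verbosity:
assert hash_fn_kwargs({"verbose": True}, ignore=("verbose",)) == hash_fn_kwargs(
    {"verbose": False}, ignore=("verbose",)
)
```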
I'm not sure if other methods in the `Dataset` API could benefit from this feature.
### Your contribution
Based on `fingerprint_transform`'s `wrapper` function [here](https://github.com/huggingface/datasets/blob/c59cc34fcd2a369d27b77cc678017f5976a926a9/src/datasets/fingerprint.py#L443), it seems to me that it should be possible to make `.map`/`._map_single` accept something like `fn_use_fingerprint_kwargs`/`fn_ignore_fingerprint_kwargs` (probably another arg name). This would then be used by `fingerprint_transform.wrapper` to hash the transformation more flexibly.
I could contribute with a PR if this feature and approach look good to you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5147/timeline | null | null | null | null | false | [
"Hi ! In the `transformers` issue the object to not hash is a `Pool` - I think you can instantiate it inside your function instead of passing it as a parameter. It's good practice that your function and all its fn_kwargs are picklable, in case you want to parallelize `map` using `num_proc>1`\r\n\r\nFor the other case `def fn(example, verbose=False):` however, I agree it would be nice to let the user specify that \"verbose\" needs to be ignored.\r\n\r\nDo you think providing a decorator could help ? Maybe\r\n```python\r\[email protected](ignore_kwargs=[\"verbose\"])\r\ndef func(example, verbose=False):\r\n ...\r\n```",
"Hi @lhoestq! Thanks for your response.\r\n\r\nA `Pool` shouldn't be instantiated within the function, because there's a huge overhead in doing so. The main idea is that the same `Pool` should be used across all function calls. Parallel `map` is not helpful/desired in that specific scenario, because the heavy parallel computation is done by another lib (`pyctcdecode`, called within `transformer`'s model inference code).\r\n\r\nBut yes, it makes sense to be able to leverage parallel processing by just doing `num_proc>1` when possible.\r\n\r\nYour decorator suggestions seems like a pretty clean API to me. I didn't find a `datasets.hashing` module though. Would it be created for this specific purpose? Any downsides in just using `datasets.fingerprint`?\r\n\r\nAnd would `datasets.hashing.register` just add some metadata to `func` in your approach (so it could be inspected from `fingerprint_transform`)?\r\n\r\nAnd looking to the `datasets.Dataset` API, `.filter` would also benefited from this.",
"> Would it be created for this specific purpose? Any downsides in just using datasets.fingerprint?\r\n\r\nThis can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\n> And would datasets.hashing.register just add some metadata to func in your approach (so it could be inspected from fingerprint_transform)?\r\n\r\nYup that's the idea :)\r\n\r\n> And looking to the datasets.Dataset API, .filter would also benefited from this.\r\n\r\nIndeed !\r\n\r\n-----\r\n\r\nIf you would like to contribute this you can assign yourself to this issue by posting #self-assign\r\nAnd of course if you have questions or if I can help, feel free to ping me !",
"> This can also go in datasets.fingerprint indeed - but maybe datasets.hashing tells more about what the register function does (i.e. register this function to have a custom hashing) ?\r\n\r\nSure, it makes sense.\r\n\r\n---\r\n\r\nI don't plan to work on it right now, so I'll let it unassigned in case somebody wants to join. I'll get back at it as soon as possible though.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3453 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3453/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3453/comments | https://api.github.com/repos/huggingface/datasets/issues/3453/events | https://github.com/huggingface/datasets/issues/3453 | 1,084,515,911 | I_kwDODunzps5ApGZH | 3,453 | ValueError while iter_archive | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-12-20T08:46:18Z | 2021-12-20T10:04:59Z | 2021-12-20T10:04:59Z | null | ## Describe the bug
After the merge of:
- #3443
the method `iter_archive` throws a ValueError:
```
ValueError: read of closed file
```
## Steps to reproduce the bug
```python
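# context (comment added for clarity): this loop runs inside a dataset script,
# e.g. in a builder's _split_generators, where dl_manager is provided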
for path, file in dl_manager.iter_archive(archive_path):
    pass
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3453/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3453/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5641/comments | https://api.github.com/repos/huggingface/datasets/issues/5641/events | https://github.com/huggingface/datasets/issues/5641 | 1,625,942,730 | I_kwDODunzps5g6erK | 5,641 | Features cannot be named "self" | [] | closed | false | null | 0 | 2023-03-15T17:16:40Z | 2023-03-16T17:14:51Z | 2023-03-16T17:14:51Z | null | ### Describe the bug
Hi,
I noticed that we cannot create a Hugging Face dataset from a pandas DataFrame with a column named `self`.
The error seems to come from argument validation in the `Features.from_dict` function.
### Steps to reproduce the bug
```python
import datasets
import pandas as pd
dummy_pandas = pd.DataFrame([0,1,2,3], columns = ["self"])
datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas)
```
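A possible workaround (my suggestion, not from the report) is to rename the column before conversion, since only the name `self` collides:
```python
import datasets
import pandas as pd

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])
dummy_pandas = dummy_pandas.rename(columns={"self": "self_"})  # avoid the reserved name
dataset = datasets.Dataset.from_pandas(dummy_pandas)
```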
### Expected behavior
No error thrown
### Environment info
- `datasets` version: 2.8.0
- Python version: 3.9.5
- PyArrow version: 6.0.1
- Pandas version: 1.4.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5641/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3338 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3338/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3338/comments | https://api.github.com/repos/huggingface/datasets/issues/3338/events | https://github.com/huggingface/datasets/pull/3338 | 1,066,371,235 | PR_kwDODunzps4vJRFM | 3,338 | [WIP] Add doctests for tutorials | [] | closed | false | null | 1 | 2021-11-29T18:40:46Z | 2023-05-05T17:18:20Z | 2023-05-05T17:18:15Z | null | Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown.
### Issues
A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle variable outputs with the `ELLIPSIS` directive. When I run doctest on the `load_hub.rst` file, doctest should recognize the expected output from the docstring, and the corresponding code sample in `load_hub.rst` should pass. I am having the same issue with handling tracebacks in the `load_dataset` function.
From the docstring:
```
>>> dataset_builder.cache_dir #doctest: +ELLIPSIS
/Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...
```
Test result:
```
Failed example:
dataset_builder.cache_dir
Expected:
/Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...
Got:
/Users/steven/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1
```
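For comparison, here is a standalone file (not from this PR; the path is made up) where `ELLIPSIS` in a docstring does behave as expected, suggesting the directive itself is fine:
```python
def cache_dir():
    """
    >>> cache_dir()  # doctest: +ELLIPSIS
    '/.../imdb/plain_text/1.0.0/...'
    """
    return "/home/user/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9b"

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=True)  # the ellipses match the user-specific parts
```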
I am able to get the doctest to pass by adding the doctest directives (`ELLIPSIS` and `NORMALIZE_WHITESPACE`) to the code samples in the `rst` file directly. But my understanding is that these directives should also work in the docstrings of the functions. I am running the test from the root of the directory:
```
python -m doctest -v docs/source/load_hub.rst
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3338/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3338/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3338.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3338",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3338.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3338"
} | true | [
"I manage to remove the mentions of ellipsis in the code by launching the command as follows:\r\n\r\n```\r\npython -m doctest -v docs/source/load_hub.rst -o=ELLIPSIS\r\n```\r\n\r\nThe way you put your ellipsis will only work on mac, I've adapted it for linux as well with the following:\r\n\r\n```diff\r\n >>> from datasets import load_dataset_builder\r\n >>> dataset_builder = load_dataset_builder('imdb')\r\n- >>> print(dataset_builder.cache_dir) #doctest: +ELLIPSIS\r\n- /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n+ >>> print(dataset_builder.cache_dir)\r\n+ /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\n```\r\n\r\nThis passes on my machine:\r\n\r\n```\r\nTrying:\r\n print(dataset_builder.cache_dir)\r\nExpecting:\r\n /.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/...\r\nok\r\n```\r\n\r\nI'm getting a last error:\r\n\r\n```py\r\nExpected:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['sentence1', 'sentence2', 'label', 'idx'],\r\n num_rows: 1725\r\n })\r\n })\r\nGot:\r\n DatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 3668\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 408\r\n })\r\n test: Dataset({\r\n features: ['idx', 'label', 'sentence1', 'sentence2'],\r\n num_rows: 1725\r\n })\r\n })\r\n```\r\n\r\nBut this is due to `doctest` looking for an exact match and the list having an unordered print order. I wish `doctest` would be a bit more flexible with that."
] |
https://api.github.com/repos/huggingface/datasets/issues/2605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2605/comments | https://api.github.com/repos/huggingface/datasets/issues/2605/events | https://github.com/huggingface/datasets/pull/2605 | 938,648,164 | MDExOlB1bGxSZXF1ZXN0Njg0OTkyODIz | 2,605 | Make any ClientError trigger retry in streaming mode (e.g. ClientOSError) | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-07T08:47:23Z | 2021-07-12T14:10:27Z | 2021-07-07T08:59:13Z | null | During the FLAX sprint some users have this error when streaming datasets:
```python
aiohttp.client_exceptions.ClientOSError: [Errno 104] Connection reset by peer
```
This error must trigger a retry instead of directly crashing
Therefore I extended the error type that triggers the retry to be the base aiohttp error type: `ClientError`
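A simplified sketch of the pattern (not the actual `datasets` code; the function name and backoff are illustrative):
```python
import asyncio
import aiohttp

async def read_with_retry(url, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            async with aiohttp.ClientSession() as session:
                async with session.get(url) as response:
                    return await response.read()
        except aiohttp.ClientError:  # base class of ClientOSError and friends
            if attempt == max_retries:
                raise
            await asyncio.sleep(attempt)  # naive linear backoff
```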
In particular both `ClientOSError` and `ServerDisconnectedError` inherit from `ClientError`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2605/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2605",
"merged_at": "2021-07-07T08:59:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2605"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/187/comments | https://api.github.com/repos/huggingface/datasets/issues/187/events | https://github.com/huggingface/datasets/issues/187 | 623,627,800 | MDU6SXNzdWU2MjM2Mjc4MDA= | 187 | [Question] How to load wikipedia? Beam runner? | [] | closed | false | null | 2 | 2020-05-23T10:18:52Z | 2020-05-25T00:12:02Z | 2020-05-25T00:12:02Z | null | When running `nlp.load_dataset('wikipedia')`, I got
* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.`
* `AttributeError: 'NoneType' object has no attribute 'size'`
Could somebody tell me what I should do?
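For what it's worth, the warning suggests passing a Beam runner or pipeline options explicitly; something along these lines should work, though the exact kwargs depend on the `nlp` version (hedged sketch):
```python
import nlp

# "DirectRunner" runs Apache Beam locally; big wikis are still heavy to process.
dset = nlp.load_dataset("wikipedia", "20200501.en", beam_runner="DirectRunner")
```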
# Env
On Colab,
```
git clone https://github.com/huggingface/nlp
cd nlp
pip install -q .
```
```
%pip install -q apache_beam mwparserfromhell
-> ERROR: pydrive 1.3.1 has requirement oauth2client>=4.0.0, but you'll have oauth2client 3.0.0 which is incompatible.
ERROR: google-api-python-client 1.7.12 has requirement httplib2<1dev,>=0.17.0, but you'll have httplib2 0.12.0 which is incompatible.
ERROR: chainer 6.5.0 has requirement typing-extensions<=3.6.6, but you'll have typing-extensions 3.7.4.2 which is incompatible.
```
```
pip install -q apache-beam[interactive]
ERROR: google-colab 1.0.0 has requirement ipython~=5.5.0, but you'll have ipython 5.10.0 which is incompatible.
```
# The whole message
```
WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be used.
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wikipedia/20200501.aa/1.0.0...
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
44 frames
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size'
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
<ipython-input-9-340aabccefff> in <module>()
----> 1 dset = nlp.load_dataset('wikipedia')
/usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
518 download_mode=download_mode,
519 ignore_verifications=ignore_verifications,
--> 520 save_infos=save_infos,
521 )
522
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
370 verify_infos = not save_infos and not ignore_verifications
371 self._download_and_prepare(
--> 372 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
373 )
374 # Sync info
/usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
770 with beam.Pipeline(runner=beam_runner, options=beam_options,) as pipeline:
771 super(BeamBasedBuilder, self)._download_and_prepare(
--> 772 dl_manager, pipeline=pipeline, verify_infos=False
773 ) # TODO{beam} verify infos
774
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in __exit__(self, exc_type, exc_val, exc_tb)
501 def __exit__(self, exc_type, exc_val, exc_tb):
502 if not exc_type:
--> 503 self.run().wait_until_finish()
504
505 def visit(self, visitor):
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
481 return Pipeline.from_runner_api(
482 self.to_runner_api(use_fake_coders=True), self.runner,
--> 483 self._options).run(False)
484
485 if self._options.view_as(TypeOptions).runtime_type_check:
/usr/local/lib/python3.6/dist-packages/apache_beam/pipeline.py in run(self, test_runner_api)
494 finally:
495 shutil.rmtree(tmpdir)
--> 496 return self.runner.run_pipeline(self, self._options)
497
498 def __enter__(self):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/direct/direct_runner.py in run_pipeline(self, pipeline, options)
128 runner = BundleBasedDirectRunner()
129
--> 130 return runner.run_pipeline(pipeline, options)
131
132
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_pipeline(self, pipeline, options)
553
554 self._latest_run_result = self.run_via_runner_api(
--> 555 pipeline.to_runner_api(default_environment=self._default_environment))
556 return self._latest_run_result
557
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_via_runner_api(self, pipeline_proto)
563 # TODO(pabloem, BEAM-7514): Create a watermark manager (that has access to
564 # the teststream (if any), and all the stages).
--> 565 return self.run_stages(stage_context, stages)
566
567 @contextlib.contextmanager
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in run_stages(self, stage_context, stages)
704 stage,
705 pcoll_buffers,
--> 706 stage_context.safe_coders)
707 metrics_by_stage[stage.name] = stage_results.process_bundle.metrics
708 monitoring_infos_by_stage[stage.name] = (
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in _run_stage(self, worker_handler_factory, pipeline_components, stage, pcoll_buffers, safe_coders)
1071 cache_token_generator=cache_token_generator)
1072
-> 1073 result, splits = bundle_manager.process_bundle(data_input, data_output)
1074
1075 def input_for(transform_id, input_id):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
-> 2334 for result, split_result in executor.map(execute, part_inputs):
2335
2336 split_result_list += split_result
/usr/lib/python3.6/concurrent/futures/_base.py in result_iterator()
584 # Careful not to keep a reference to the popped future
585 if timeout is None:
--> 586 yield fs.pop().result()
587 else:
588 yield fs.pop().result(end_time - time.monotonic())
/usr/lib/python3.6/concurrent/futures/_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
/usr/lib/python3.6/concurrent/futures/_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
/usr/local/lib/python3.6/dist-packages/apache_beam/utils/thread_pool_executor.py in run(self)
42 # If the future wasn't cancelled, then attempt to execute it.
43 try:
---> 44 self._future.set_result(self._fn(*self._fn_args, **self._fn_kwargs))
45 except BaseException as exc:
46 # Even though Python 2 futures library has #set_exection(),
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in execute(part_map)
2329 self._registered,
2330 cache_token_generator=self._cache_token_generator)
-> 2331 return bundle_manager.process_bundle(part_map, expected_outputs)
2332
2333 with UnboundedThreadPoolExecutor() as executor:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in process_bundle(self, inputs, expected_outputs)
2243 process_bundle_descriptor_id=self._bundle_descriptor.id,
2244 cache_tokens=[next(self._cache_token_generator)]))
-> 2245 result_future = self._worker_handler.control_conn.push(process_bundle_req)
2246
2247 split_results = [] # type: List[beam_fn_api_pb2.ProcessBundleSplitResponse]
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/portability/fn_api_runner.py in push(self, request)
1557 self._uid_counter += 1
1558 request.instruction_id = 'control_%s' % self._uid_counter
-> 1559 response = self.worker.do_instruction(request)
1560 return ControlFuture(request.instruction_id, response)
1561
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in do_instruction(self, request)
413 # E.g. if register is set, this will call self.register(request.register))
414 return getattr(self, request_type)(
--> 415 getattr(request, request_type), request.instruction_id)
416 else:
417 raise NotImplementedError
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/sdk_worker.py in process_bundle(self, request, instruction_id)
448 with self.maybe_profile(instruction_id):
449 delayed_applications, requests_finalization = (
--> 450 bundle_processor.process_bundle(instruction_id))
451 monitoring_infos = bundle_processor.monitoring_infos()
452 monitoring_infos.extend(self.state_cache_metrics_fn())
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_bundle(self, instruction_id)
837 for data in data_channel.input_elements(instruction_id,
838 expected_transforms):
--> 839 input_op_by_transform_id[data.transform_id].process_encoded(data.data)
840
841 # Finish all operations.
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/bundle_processor.py in process_encoded(self, encoded_windowed_values)
214 decoded_value = self.windowed_coder_impl.decode_from_stream(
215 input_stream, True)
--> 216 self.output(decoded_value)
217
218 def try_split(self, fraction_of_remainder, total_buffer_size):
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.Operation.output()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.SingletonConsumerSet.receive()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/worker/operations.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.worker.operations.DoOperation.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner._reraise_augmented()
/usr/local/lib/python3.6/dist-packages/future/utils/__init__.py in raise_with_traceback(exc, traceback)
417 if traceback == Ellipsis:
418 _, _, traceback = sys.exc_info()
--> 419 raise exc.with_traceback(traceback)
420
421 else:
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.DoFnRunner.process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker.invoke_process()
/usr/local/lib/python3.6/dist-packages/apache_beam/runners/common.cpython-36m-x86_64-linux-gnu.so in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window()
/usr/local/lib/python3.6/dist-packages/apache_beam/io/iobase.py in process(self, element, init_result)
1080 for e in bundle[1]: # values
1081 writer.write(e)
-> 1082 return [window.TimestampedValue(writer.close(), timestamp.MAX_TIMESTAMP)]
1083
1084
/usr/local/lib/python3.6/dist-packages/apache_beam/io/filebasedsink.py in close(self)
421
422 def close(self):
--> 423 self.sink.close(self.temp_handle)
424 return self.temp_shard_path
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in close(self, writer)
536 def close(self, writer):
537 if len(self._buffer[0]) > 0:
--> 538 self._flush_buffer()
539 if self._record_batches_byte_size > 0:
540 self._write_batches(writer)
/usr/local/lib/python3.6/dist-packages/apache_beam/io/parquetio.py in _flush_buffer(self)
568 for x in arrays:
569 for b in x.buffers():
--> 570 size = size + b.size
571 self._record_batches_byte_size = self._record_batches_byte_size + size
AttributeError: 'NoneType' object has no attribute 'size' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/187/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/187/timeline | null | completed | null | null | false | [
"I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?",
"Yes we (well @lhoestq) are very actively working on this."
] |
https://api.github.com/repos/huggingface/datasets/issues/2538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2538/comments | https://api.github.com/repos/huggingface/datasets/issues/2538/events | https://github.com/huggingface/datasets/issues/2538 | 927,940,691 | MDU6SXNzdWU5Mjc5NDA2OTE= | 2,538 | Loading partial dataset when debugging | [] | open | false | null | 11 | 2021-06-23T07:19:52Z | 2023-04-19T11:05:38Z | null | null | I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits).
Every time I execute `load_dataset` for the imdb dataset, it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues.
Is there a way to only load part of the dataset on load_dataset? This would really speed up my workflow.
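For reference, the sliced-split syntax already covers part of this (the full dataset is still downloaded and cached once, but only the slice is loaded into the `Dataset`):
```python
from datasets import load_dataset

# Load only the first 100 rows of the train split from the cached Arrow files.
small_train = load_dataset("imdb", split="train[:100]")
```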
Something like a debug mode would really help. Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2538/timeline | null | null | null | null | false | [
"Hi ! `load_dataset` downloads the full dataset once and caches it, so that subsequent calls to `load_dataset` just reloads the dataset from your disk.\r\nThen when you specify a `split` in `load_dataset`, it will just load the requested split from the disk. If your specified split is a sliced split (e.g. `\"train[:10]\"`), then it will load the 10 first rows of the train split that you have on disk.\r\n\r\nTherefore, as long as you don't delete your cache, all your calls to `load_dataset` will be very fast. Except the first call that downloads the dataset of course ^^",
"That’s a use case for the new streaming feature, no?",
"Hi @reachtarunhere.\r\n\r\nBesides the above insights provided by @lhoestq and @thomwolf, there is also a Dataset feature in progress (I plan to finish it this week): #2249, which will allow you, when calling `load_dataset`, to pass the option to download/preprocess/cache only some specific split(s), which will definitely speed up your workflow.\r\n\r\nIf this feature is interesting for you, I can ping you once it will be merged into the master branch.",
"Thanks all for responding.\r\n\r\nHey @albertvillanova \r\n\r\nThanks. Yes, I would be interested.\r\n\r\n@lhoestq I think even if a small split is specified it loads up the full dataset from the disk (please correct me if this is not the case). Because it does seem to be slow to me even on subsequent calls. There is no repeated downloading so it seems that the cache is working.\r\n\r\nI am not aware of the streaming feature @thomwolf mentioned. So I might need to read up on it.",
"@reshinthadithyan I use the .select function to have a fraction of indices.",
"If I want to create a dataset, containing only the 10 elements of a given dataset (slice it), how do I do that?",
"```python \r\nsmall_ds = ds.select(range(10))\r\n```",
"\r\n\r\n> ```python\r\n> small_ds = ds.select(range(10))\r\n> ```\r\n\r\nThanks, but this doesn't help me to save time during initial loading, right?",
"Indeed by default load_dataset would download and prepare everything as Arrow files. And passing `split=train[:10]` memory maps only the beginning of the full dataset that has been prepared on disk.\r\n\r\nIf you don't want to download everything, you can use streaming : \r\n```python \r\nids = load_dataset(..., streaming=True)\r\nfirst_samples = list(ids[\"train\"].take(10))\r\n```\r\n\r\nTo get a Dataset you can use \r\n```python \r\nds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nedit: fixed small bug",
"Thanks @lhoestq, but I don't think it is 100% accurate, as it doesn't keep the dataset structure exactly the same.\r\nTo load the full dataset, I do:\r\n```\r\ndata = load_dataset(\"json\", data_files=\"a.json\")\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\n\r\nBut when I am changing it as per your instructions: \r\n```\r\nids = load_dataset(\"json\", data_files=\"a.json\", streaming=True)\r\ndata = Dataset.from_generator(ids[\"train\"].take(1).__iter__)\r\ntrain_data = data[\"train\"].shuffle()\r\n```\r\nIt throws KeyError.\r\nI need a simple way, like you suggested, to have a subset of a Dataset, which exactly the same attributes.\r\n",
"Whoops I fixed my code sorry\r\n```diff\r\n- ds = Dataset.from_generator(ids[\"train\"].take(10).__iter__)\r\n+ ds = Dataset.from_generator(ids.take(10).__iter__)\r\n```\r\n\r\nin your case that means running\r\n```python\r\ntrain_data = data.shuffle()\r\n```\r\n\r\nwithout `[\"train\"]`"
] |
https://api.github.com/repos/huggingface/datasets/issues/680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/680/comments | https://api.github.com/repos/huggingface/datasets/issues/680/events | https://github.com/huggingface/datasets/pull/680 | 710,066,138 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4 | 680 | Fix bug related to boolean in GAP dataset. | [] | closed | false | null | 2 | 2020-09-28T08:39:39Z | 2020-09-29T15:54:47Z | 2020-09-29T15:54:47Z | null | ### Why I did
The value in `row["A-coref"]` and `row["B-coref"]` is `'TRUE'` or `'FALSE'`.
Since this type is `string`, `bool('FALSE')` evaluates to `True` in Python.
So, at the moment, both values are transformed into `True`.
I fixed this as described below.
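The pitfall in two lines (illustration added for clarity, not part of the original PR):
```python
assert bool("FALSE") is True          # any non-empty string is truthy
assert ("FALSE" == "TRUE") is False   # explicit comparison gives the intended value
```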
### What I did
I modified `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`.
Thank you! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/680/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/680/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/680.diff",
"html_url": "https://github.com/huggingface/datasets/pull/680",
"merged_at": "2020-09-29T15:54:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/680.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/680"
} | true | [
"Hi !\r\n\r\nGood catch, thanks for creating this PR :)\r\n\r\nCould you also regenerate the metadata for this dataset using \r\n```\r\ndatasets-cli test ./datasets/gap --save_infos --all_configs\r\n```\r\n\r\nThat'd be awesome",
"@lhoestq Thank you for your revieing!!!\r\n\r\nI've performed it and have read CONTRIBUTING.md now!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2482/comments | https://api.github.com/repos/huggingface/datasets/issues/2482/events | https://github.com/huggingface/datasets/pull/2482 | 918,846,027 | MDExOlB1bGxSZXF1ZXN0NjY4MjMyMzI5 | 2,482 | Allow to use tqdm>=4.50.0 | [] | closed | false | null | 0 | 2021-06-11T14:49:21Z | 2021-06-11T15:11:51Z | 2021-06-11T15:11:50Z | null | We used to have permission errors on Windows with the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232))
They were due to open arrow files not properly closed by pyarrow.
Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6, `gc.collect` is called each time we no longer need an Arrow file, to make sure that the files are closed (sketched below).
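The workaround pattern, sketched (simplified; in the real code this happens around the Arrow table lifecycle):
```python
import gc
import pyarrow as pa

table = pa.table({"col": [1, 2, 3]})  # stand-in for a memory-mapped Arrow table
# ... use the table ...
del table     # drop the last reference to the underlying file
gc.collect()  # prompt pyarrow to close the file handle (this is what matters on Windows)
```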
close https://github.com/huggingface/datasets/issues/2471
cc @lewtun | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2482/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2482",
"merged_at": "2021-06-11T15:11:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2482"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1295/comments | https://api.github.com/repos/huggingface/datasets/issues/1295/events | https://github.com/huggingface/datasets/pull/1295 | 759,375,251 | MDExOlB1bGxSZXF1ZXN0NTM0MzkxNzE1 | 1,295 | add hrenwac_para | [] | closed | false | null | 0 | 2020-12-08T11:40:06Z | 2020-12-11T17:42:20Z | 2020-12-11T17:42:20Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1295/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1295",
"merged_at": "2020-12-11T17:42:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1295"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/1982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1982/comments | https://api.github.com/repos/huggingface/datasets/issues/1982/events | https://github.com/huggingface/datasets/pull/1982 | 821,448,791 | MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0 | 1,982 | Fix NestedDataStructure.data for empty dict | [] | closed | false | null | 5 | 2021-03-03T20:16:51Z | 2021-03-04T16:46:04Z | 2021-03-03T22:48:36Z | null | Fix #1981 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1982/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1982",
"merged_at": "2021-03-03T22:48:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1982"
} | true | [
"I validated that this fixed the problem, thank you, @albertvillanova!\r\n",
"still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~\\.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n 758 # Extract manually downloaded files.\r\n 759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n 761 \r\n 762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list",
"Hi @sabania \r\nWe released a patch version that fixes this issue (1.4.1), can you try with the new version please ?\r\n```\r\npip install --upgrade datasets\r\n```",
"I re-validated with the hotfix and the problem is no more.",
"It's working. thanks a lot."
] |
https://api.github.com/repos/huggingface/datasets/issues/545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/545/comments | https://api.github.com/repos/huggingface/datasets/issues/545/events | https://github.com/huggingface/datasets/issues/545 | 689,138,878 | MDU6SXNzdWU2ODkxMzg4Nzg= | 545 | New release coming up for this library | [] | closed | false | null | 1 | 2020-08-31T11:37:38Z | 2021-01-13T10:59:04Z | 2021-01-13T10:59:04Z | null | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned for the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval techniques), it will:
- have support for multi-modal datasets
- include various significant improvements on speed for standard processing (map, shuffling, ...)
- have better support for metrics (better caching and a robust API) and a bigger focus on reproducibility
- change the name to the final name (voted by the community): `datasets`
- be the 1.0.0 release as we think the API will be mostly stabilized from now on | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/545/timeline | null | completed | null | null | false | [
"Update: release is planed mid-next week."
] |
https://api.github.com/repos/huggingface/datasets/issues/4838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4838/comments | https://api.github.com/repos/huggingface/datasets/issues/4838/events | https://github.com/huggingface/datasets/pull/4838 | 1,337,194,918 | PR_kwDODunzps49F08R | 4,838 | Fix documentation card of adv_glue dataset | [] | closed | false | null | 2 | 2022-08-12T13:15:26Z | 2022-08-15T10:17:14Z | 2022-08-15T10:02:11Z | null | Fix documentation card of adv_glue dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4838/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4838.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4838",
"merged_at": "2022-08-15T10:02:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4838.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4838"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The failing test has nothing to do with this PR:\r\n```\r\nFAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/4477 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4477/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4477/comments | https://api.github.com/repos/huggingface/datasets/issues/4477/events | https://github.com/huggingface/datasets/issues/4477 | 1,268,308,986 | I_kwDODunzps5LmNv6 | 4,477 | Dataset Viewer issue for fgrezes/WIESP2022-NER | [] | closed | false | null | 2 | 2022-06-11T15:49:17Z | 2022-07-18T13:07:33Z | 2022-07-18T13:07:33Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4477/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4477/timeline | null | completed | null | null | false | [
"https://huggingface.co/datasets/fgrezes/WIESP2022-NER\r\n\r\nThe error:\r\n\r\n```\r\nMessage: Couldn't find a dataset script at /src/services/worker/fgrezes/WIESP2022-NER/WIESP2022-NER.py or any data file in the same directory. Couldn't find 'fgrezes/WIESP2022-NER' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**test*', '**eval*'] in dataset repository fgrezes/WIESP2022-NER with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']\r\n```\r\n\r\nI understand the issue is not related to the dataset viewer in itself, but with the autodetection of the data files without a loading script in the datasets library. cc @lhoestq @albertvillanova @mariosasko ",
"Apparently it finds `scoring-scripts/compute_seqeval.py` which matches `**eval*`, a regex that detects a test split. We should probably improve the regex because it's not supposed to catch this kind of files. It must also only check for files with supported extensions: txt, csv, png etc."
] |
https://api.github.com/repos/huggingface/datasets/issues/2400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2400/comments | https://api.github.com/repos/huggingface/datasets/issues/2400/events | https://github.com/huggingface/datasets/issues/2400 | 899,867,212 | MDU6SXNzdWU4OTk4NjcyMTI= | 2,400 | Concatenate several datasets with removed columns is not working. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-05-24T17:40:15Z | 2021-05-25T05:52:01Z | 2021-05-25T05:51:59Z | null | ## Describe the bug
You can't concatenate datasets if you have removed columns from them beforehand.
## Steps to reproduce the bug
```python
from datasets import load_dataset, concatenate_datasets
wikiann= load_dataset("wikiann","en")
wikiann["train"] = wikiann["train"].remove_columns(["langs","spans"])
wikiann["test"] = wikiann["test"].remove_columns(["langs","spans"])
assert wikiann["train"].features.type == wikiann["test"].features.type
concate = concatenate_datasets([wikiann["train"],wikiann["test"]])
```
## Expected results
Merged dataset
## Actual results
```python
ValueError: External features info don't match the dataset:
Got
{'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>>
but expected something like
{'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
with type
struct<ner_tags: list<item: int64>, tokens: list<item: string>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: ~1.6.2~ 1.5.0
- Platform: macos
- Python version: 3.8.5
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2400/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\ndid you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?\r\n\r\nThis code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error).",
"@mariosasko you are right I was still on `1.5.0`. "
] |
https://api.github.com/repos/huggingface/datasets/issues/4613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4613/comments | https://api.github.com/repos/huggingface/datasets/issues/4613/events | https://github.com/huggingface/datasets/pull/4613 | 1,291,181,193 | PR_kwDODunzps46skd6 | 4,613 | Align/fix license metadata info | [] | closed | false | null | 3 | 2022-07-01T09:50:50Z | 2022-07-01T12:53:57Z | 2022-07-01T12:42:47Z | null | fix bad "other-*" licenses and add the corresponding "license_details" when relevant | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4613/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4613/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4613.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4613",
"merged_at": "2022-07-01T12:42:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4613.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4613"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you thank you! Let's merge and pray? 😱 ",
"I just need to add `license_details` to the validator and yup we can merge"
] |
https://api.github.com/repos/huggingface/datasets/issues/1750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1750/comments | https://api.github.com/repos/huggingface/datasets/issues/1750/events | https://github.com/huggingface/datasets/pull/1750 | 788,668,085 | MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1 | 1,750 | Fix typo in README.md of cnn_dailymail | [] | closed | false | null | 2 | 2021-01-19T03:06:05Z | 2021-01-19T11:07:29Z | 2021-01-19T09:48:43Z | null | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1750/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"merged_at": "2021-01-19T09:48:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1750"
} | true | [
"Good catch, thanks!",
"Thank you for merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/1146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1146/comments | https://api.github.com/repos/huggingface/datasets/issues/1146/events | https://github.com/huggingface/datasets/pull/1146 | 757,498,565 | MDExOlB1bGxSZXF1ZXN0NTMyODY1NTAy | 1,146 | Add LINNAEUS | [] | closed | false | null | 0 | 2020-12-05T01:01:09Z | 2020-12-05T16:35:53Z | 2020-12-05T16:35:53Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1146/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1146.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1146",
"merged_at": "2020-12-05T16:35:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1146.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1146"
} | true | [] |
|
https://api.github.com/repos/huggingface/datasets/issues/2359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2359/comments | https://api.github.com/repos/huggingface/datasets/issues/2359/events | https://github.com/huggingface/datasets/issues/2359 | 891,946,017 | MDU6SXNzdWU4OTE5NDYwMTc= | 2,359 | Allow model labels to be passed during task preparation | [] | closed | false | null | 1 | 2021-05-14T13:58:28Z | 2022-10-05T17:37:22Z | 2022-10-05T17:37:22Z | null | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example, for sentiment classification on Amazon reviews you could have these labels:
- "1 star", "2 stars", "3 stars", "4 stars", "5 stars"
- "1", "2", "3", "4", "5"
Some models may use the first set, while other models use the second set.
Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets?
Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that.
The label set could also be the same but not in the same order. For NLI, for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the label order of the model.
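A minimal sketch of how this could look with the `align_labels_with_mapping` method that was eventually added for this purpose (see the resolution comment below):

```python
from datasets import ClassLabel, Dataset

# Hypothetical NLI-style dataset whose label order differs from the model's
ds = Dataset.from_dict({"label": [0, 1, 2]}).cast_column(
    "label", ClassLabel(names=["neutral", "entailment", "contradiction"])
)
model_label2id = {"neutral": 0, "contradiction": 1, "entailment": 2}

# Reorders the dataset's label ids so they match the model's mapping
ds = ds.align_labels_with_mapping(model_label2id, "label")
```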
Let me know what you think! This can be done in a future PR.
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2359/timeline | null | completed | null | null | false | [
"We now have the `align_labels_with_mapping` method in the API for this purpose."
] |
https://api.github.com/repos/huggingface/datasets/issues/4873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4873/comments | https://api.github.com/repos/huggingface/datasets/issues/4873/events | https://github.com/huggingface/datasets/issues/4873 | 1,347,592,022 | I_kwDODunzps5QUp9W | 4,873 | Multiple dataloader memory error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2022-08-23T08:59:50Z | 2023-01-26T02:01:11Z | null | null | For the use of multiple datasets and tasks, we use more than 200 dataloaders, which we then pass into `dataloader1, dataloader2, ..., dataloader200 = accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`
This causes a memory error when generating batches. Any solutions?
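For reference, a minimal sketch of the workaround suggested in the comments below: collapsing datasets that share a schema into a single stream with `interleave_datasets`, so one dataloader can replace many (toy data for illustration).

```python
from datasets import Dataset, interleave_datasets

# Toy stand-ins for datasets with the same structure/task
ds1 = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds2 = Dataset.from_dict({"text": ["c", "d"], "label": [1, 0]})

# Samples from both sources; one DataLoader over `merged` replaces two loaders
merged = interleave_datasets([ds1, ds2], probabilities=[0.5, 0.5], seed=42)
```

The full traceback: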
```bash
File "/home/xxx/my_code/src/utils/data_utils.py", line 54, in generate_batch
x = next(iterator)
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 301, in __iter__
for batch in super().__iter__():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__
data = self._next_data()
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 28, in fetch
data.append(next(self.dataset_iter))
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/accelerate/data_loader.py", line 249, in __iter__
for element in self.dataset:
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 503, in __iter__
for key, example in self._iter():
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 500, in _iter
yield from ex_iterable
File "/home/xxx/anaconda3/envs/pt1.7/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 231, in __iter__
new_key = "_".join(str(key) for key in keys)
MemoryError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4873/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4873/timeline | null | null | null | null | false | [
"Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?",
"Hi @mariosasko, thank you for your reply. I tried pre-concatenating different datasets into one, but one key need is to keep each batch the same data type. Considering that the concatenate-then-segment operation for prefetched samples may span across different data types after concatenating/interleaving (cuz different data sources are mixed), any solution to remain the same data source for each batch?",
"@cyk1337 have you found any solutions to it?\r\n@mariosasko I tried with interleave_datasets to sample batches from two large datasets (wikipedia alike) and it results in out-of-memory error during data loading (16gpus, >1TB physical memory). Do you have any idea about it?"
] |
https://api.github.com/repos/huggingface/datasets/issues/5692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5692/comments | https://api.github.com/repos/huggingface/datasets/issues/5692/events | https://github.com/huggingface/datasets/issues/5692 | 1,649,818,644 | I_kwDODunzps5iVjwU | 5,692 | pyarrow.lib.ArrowInvalid: Unable to merge: Field <field> has incompatible types | [] | open | false | null | 2 | 2023-03-31T18:19:40Z | 2023-04-04T14:38:30Z | null | null | ### Describe the bug
When loading the dataset [wikianc-en](https://huggingface.co/datasets/cyanic-selkie/wikianc-en) which I created using [this](https://github.com/cyanic-selkie/wikianc) code, I get the following error:
```
Traceback (most recent call last):
File "/home/sven/code/rector/answer-detection/train.py", line 106, in <module>
(dataset, weights) = get_dataset(args.dataset, tokenizer, labels, args.padding)
File "/home/sven/code/rector/answer-detection/dataset.py", line 106, in get_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en")
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/load.py", line 1794, in load_dataset
ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1106, in as_dataset
datasets = map_nested(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 443, in map_nested
mapped = [
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 444, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 346, in _single_map_nested
return function(data_struct)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1136, in _build_single_dataset
ds = self._as_dataset(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/builder.py", line 1207, in _as_dataset
dataset_kwargs = ArrowReader(cache_dir, self.info).read(
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 239, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 260, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/arrow_reader.py", line 203, in _read_files
pa_table = concat_tables(pa_tables) if len(pa_tables) != 1 else pa_tables[0]
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1808, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1514, in from_tables
return cls.from_blocks(blocks)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1427, in from_blocks
table = cls._concat_blocks(blocks, axis=0)
File "/home/sven/.cache/pypoetry/virtualenvs/rector-Z2mdKRnn-py3.10/lib/python3.10/site-packages/datasets/table.py", line 1373, in _concat_blocks
return pa.concat_tables(pa_tables, promote=True)
File "pyarrow/table.pxi", line 5224, in pyarrow.lib.concat_tables
File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Unable to merge: Field paragraph_anchors has incompatible types: list<: struct<start: uint32 not null, end: uint32 not null, qid: uint32, pageid: uint32, title: string not null> not null> vs list<item: struct<start: uint32, end: uint32, qid: uint32, pageid: uint32, title: string>>
```
This only happens when I load the `train` split, indicating that the size of the dataset is the deciding factor.
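For illustration, a minimal sketch (with hypothetical field names) of the kind of schema clash reported above: two Arrow tables whose nested fields differ only in nullability cannot be merged.

```python
import pyarrow as pa

# Same column, but the inner struct field is non-nullable in one shard only
nonnull = pa.list_(pa.struct([pa.field("start", pa.uint32(), nullable=False)]))
nullable = pa.list_(pa.struct([pa.field("start", pa.uint32())]))
t1 = pa.table({"spans": pa.array([[{"start": 0}]], type=nonnull)})
t2 = pa.table({"spans": pa.array([[{"start": 1}]], type=nullable)})

# Per the trace above, this should raise
# pyarrow.lib.ArrowInvalid: Unable to merge: Field spans has incompatible types
pa.concat_tables([t1, t2], promote=True)
```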
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("cyanic-selkie/wikianc-en", split="train")
```
### Expected behavior
The dataset should load normally without any errors.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-6.2.8-arch1-1-x86_64-with-glibc2.37
- Python version: 3.10.10
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5692/timeline | null | null | null | null | false | [
"Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?",
"> Hi! The link pointing to the code that generated the dataset is broken. Can you please fix it to make debugging easier?\r\n\r\nSorry about that, it's fixed now.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/495/comments | https://api.github.com/repos/huggingface/datasets/issues/495/events | https://github.com/huggingface/datasets/pull/495 | 676,959,289 | MDExOlB1bGxSZXF1ZXN0NDY2MTY5MTA3 | 495 | stack vectors in pytorch and tensorflow | [] | closed | false | null | 0 | 2020-08-11T15:12:53Z | 2020-08-12T09:30:49Z | 2020-08-12T09:30:48Z | null | When the format of a dataset is set to pytorch or tensorflow, and if the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`.
I added support for stacked tensors for both pytorch and tensorflow.
For ragged tensors, they are stacked only for tensorflow as pytorch doesn't support ragged tensors.
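A minimal usage sketch of the new behavior (not part of this PR's diff), assuming `torch` is installed:

```python
from datasets import Dataset

ds = Dataset.from_dict({"vec": [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]})
ds.set_format(type="torch", columns=["vec"])

# Fixed-length vectors come back as one stacked tensor, not a list of tensors
print(ds[:2]["vec"].shape)  # torch.Size([2, 2])
```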
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/495/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/495.diff",
"html_url": "https://github.com/huggingface/datasets/pull/495",
"merged_at": "2020-08-12T09:30:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/495.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/495"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/400/comments | https://api.github.com/repos/huggingface/datasets/issues/400/events | https://github.com/huggingface/datasets/pull/400 | 657,975,600 | MDExOlB1bGxSZXF1ZXN0NDUwMDA1MDU5 | 400 | Web questions | [] | closed | false | null | 0 | 2020-07-16T08:28:29Z | 2020-07-16T08:50:51Z | 2020-07-16T08:42:54Z | null | add the WebQuestion dataset
#336 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/400/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/400/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2152/comments | https://api.github.com/repos/huggingface/datasets/issues/2152/events | https://github.com/huggingface/datasets/pull/2152 | 845,751,273 | MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz | 2,152 | Update README.md | [] | closed | false | null | 0 | 2021-03-31T03:21:19Z | 2021-04-01T10:20:37Z | 2021-04-01T10:20:36Z | null | Updated some descriptions of Wino_Bias dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2152/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2152.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2152",
"merged_at": "2021-04-01T10:20:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2152.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2152"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/913/comments | https://api.github.com/repos/huggingface/datasets/issues/913/events | https://github.com/huggingface/datasets/pull/913 | 752,892,020 | MDExOlB1bGxSZXF1ZXN0NTI5MDkyOTc3 | 913 | My new dataset PEC | [] | closed | false | null | 6 | 2020-11-29T11:10:37Z | 2020-12-01T10:41:53Z | 2020-12-01T10:41:53Z | null | A new dataset PEC published in EMNLP 2020. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/913/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/913.diff",
"html_url": "https://github.com/huggingface/datasets/pull/913",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/913.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/913"
} | true | [
"How to resolve these failed checks?",
"Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor example : `encoding=\"utf-8\"`\r\nTo fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\n",
"Could you also add a dataset card ? you can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThat would be awesome",
"> Thanks for adding this one :)\r\n> \r\n> To fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\n> To fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\n> For example : `encoding=\"utf-8\"`\r\n> To fix the test_load_dataset_pec , you must add the dummy_data.zip file. It is used to test the dataset script and make sure it runs fine. To add it, please refer to the steps in https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-add-a-dataset\r\n\r\nThank you for the detailed suggestion.\r\n\r\nI have added dummy_data but it still failed the DistributedDatasetTest check. My dataset has a central file (containing a python dict) that needs to be accessed by each data example. Is it because the central file cannot be distributed (which would lead to a partial dictionary)?\r\n\r\nSpecifically, the central file contains a dictionary of speakers with their attributes. Each data example is also associated with a speaker. As of now, I keep the central file and data files separately. If I remove the central file by appending the speaker attributes to each data example, then there would be lots of redundancy because there are lots of duplicate speakers in the data files.",
"The `DistributedDatasetTest` fail and the changes of this PR are not related, there was just a bug in the CI. You can ignore it",
"> Really cool thanks !\r\n> \r\n> Could you make the dummy files smaller ? For example by reducing the size of persona.txt ?\r\n> I also left a comment about the files concatenation. It would be cool to replace that with simple iterations through the different files.\r\n> \r\n> Then once this is done, you can add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If some fields can't be filled, just leave `[N/A]`\r\n\r\nSmall change: if you don't have the information for a field, please leave `[More Information Needed]` rather than `[N/A]`\r\n\r\nThe full information can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)"
] |
https://api.github.com/repos/huggingface/datasets/issues/3452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3452/comments | https://api.github.com/repos/huggingface/datasets/issues/3452/events | https://github.com/huggingface/datasets/issues/3452 | 1,083,803,178 | I_kwDODunzps5AmYYq | 3,452 | why the stratify option is omitted from test_train_split function? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues",
"id": 3761482852,
"name": "good second issue",
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue"
}
] | closed | false | null | 4 | 2021-12-18T10:37:47Z | 2022-05-25T20:43:51Z | 2022-05-25T20:43:51Z | null | Why is the stratify option omitted from the `train_test_split` function?
Is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider when splitting a dataset. | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3452/timeline | null | completed | null | null | false | [
"Hi ! It's simply not added yet :)\r\n\r\nIf someone wants to contribute to add the `stratify` parameter I'd be happy to give some pointers.\r\n\r\nIn the meantime, I guess you can use `sklearn` or other tools to do a stratified train/test split over the **indices** of your dataset and then do\r\n```\r\ntrain_dataset = dataset.select(train_indices)\r\ntest_dataset = dataset.select(test_indices)\r\n```",
"Hi @lhoestq I would like to add `stratify` parameter, can you give me some pointers for adding the same ?",
"Hi ! Sure :)\r\n\r\nThe `train_test_split` method is defined here: \r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3253-L3253\r\n\r\nand inside `train_test_split ` we need to create the right `train_indices` and `test_indices` that are passed here to `.select()`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/dc62232fa1b3bcfe2fbddcb721f2d141f8908943/src/datasets/arrow_dataset.py#L3450-L3464\r\n\r\nFor example if your dataset is like\r\n| | label |\r\n|---:|--------:|\r\n| 0 | 1 |\r\n| 1 | 1 |\r\n| 2 | 0 |\r\n| 3 | 0 |\r\n\r\nand the user passes `stratify=dataset[\"label\"]`, then you should get indices that look like this\r\n```\r\ntrain_indices = [0, 2]\r\ntest_indices = [1, 3]\r\n```\r\n\r\nthese indices will be passed to `.select` to return the stratified train and test splits :)\r\n\r\nFeel free to îng me if you have any question !",
"@lhoestq \r\nI just added the implementation for `stratify` option here #4322 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5766/comments | https://api.github.com/repos/huggingface/datasets/issues/5766/events | https://github.com/huggingface/datasets/issues/5766 | 1,671,485,882 | I_kwDODunzps5joNm6 | 5,766 | Support custom feature types | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 3 | 2023-04-17T15:46:41Z | 2023-05-03T21:58:43Z | null | null | ### Feature request
I think it would be nice to allow registering custom feature types with the 🤗 Datasets library, for example by allowing something along the following lines:
```
from datasets.features import register_feature_type  # this would be a new function

@register_feature_type
class CustomFeatureType:
    def encode_example(self, value):
        """User-provided logic to encode an example of this feature."""
        pass

    def decode_example(self, value, token_per_repo_id=None):
        """User-provided logic to decode an example of this feature."""
        pass
```
### Motivation
Users of 🤗 Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in 🤗 Datasets.
At the moment, this is only possible by monkey-patching 🤗 Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided.
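For context, a sketch of the `set_transform` workaround mentioned in the comments below, which decodes a custom column on the fly without registering a new feature type (the decoding logic here is a hypothetical placeholder):

```python
from datasets import Dataset

ds = Dataset.from_dict({"path": ["slide_1.svs", "slide_2.svs"]})

def decode_batch(batch):
    # Stand-in for user-provided decoding, e.g. opening slides with OpenSlide
    batch["slide"] = [f"<decoded {p}>" for p in batch["path"]]
    return batch

ds.set_transform(decode_batch)  # applied lazily whenever rows are accessed
print(ds[0]["slide"])
```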
### Your contribution
I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update.
https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329
I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`.
The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5766/timeline | null | null | null | null | false | [
"Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed",
"An interesting proposal indeed. \r\n\r\nPandas and Polars have the \"extension API\", so doing something similar on our side could be useful, too. However, this requires defining a common interface for the existing feature types before discussing the API/workflow for defining/sharing custom feature types, and this could take some time.\r\n\r\nIt would also be nice if the datasets viewer could render these custom types.",
"Thank you for your replies! @lhoestq I have a use case involving whole-slide images in digital pathology. These are very large images (potentially gigapixel scale), so standard image tools are not suitable. Essentially, encoding/decoding can be done from/to [`OpenSlide`](https://openslide.org/api/python/) objects. Though there may be interest in this use case from the digital pathology community, it may not be sufficiently useful to suggest adding the feature type, but there will likely be many other use cases for a generic custom feature type.\r\n\r\nThank you for pointing out `set_transform`! I will make sure to keep this in mind in the future.\r\n\r\n@mariosasko An \"extension API\" sounds like a good idea, though I understand that this needs to be properly defined, and that you will need to discuss it internally. Support from the viewer would be awesome, too, though the generalization to arbitrary types sounds challenging.\r\n\r\nFor now, happy to know that you're considering the feature. Feel free to let me know if I can do anything to support the process."
] |
https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | [] | closed | false | null | 0 | 2020-05-27T09:03:58Z | 2020-05-27T16:40:39Z | 2020-05-27T16:40:38Z | null | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example, for glue there are cola, sst2, mrpc, etc.
Currently, if a user does `load_dataset('glue')`, then the `cola` config is loaded by default, which can be confusing. Instead, we should raise an error to let the user know that they have to pick one of the available configs (as proposed in #152). For example, for glue the message looks like:
```
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
```
The error is raised if the config name is missing and if there are >=2 possible configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/203/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/203.diff",
"html_url": "https://github.com/huggingface/datasets/pull/203",
"merged_at": "2020-05-27T16:40:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/203.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/203"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2542/comments | https://api.github.com/repos/huggingface/datasets/issues/2542/events | https://github.com/huggingface/datasets/issues/2542 | 928,540,382 | MDU6SXNzdWU5Mjg1NDAzODI= | 2,542 | `datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-06-23T18:41:16Z | 2021-06-25T21:50:05Z | 2021-06-24T14:57:08Z | null | ## Describe the bug
Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("drop")
load_dataset("adversarial_qa", "adversarialQA")
```
## Expected results
The example keys should be unique.
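As a generic sketch (not the actual `drop` loading script), one way a script's `_generate_examples` can keep keys unique is to pair the raw id with the enumeration index:

```python
def _generate_examples(rows):
    for idx, row in enumerate(rows):
        # The idx suffix keeps the key unique and deterministic
        # even when the underlying ids repeat
        yield f"{row['id']}_{idx}", row
```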
## Actual results
```bash
>>> load_dataset("drop")
Using custom data configuration default
Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split
num_examples, num_bytes = writer.finalize()
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize
self.check_duplicate_keys()
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys
raise DuplicatedKeysError(key)
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.7.0
- Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2542/timeline | null | completed | null | null | false | [
"very much related: https://github.com/huggingface/datasets/pull/2333",
"Hi @VictorSanh, thank you for reporting this issue with duplicated keys.\r\n\r\n- The issue with \"adversarial_qa\" was fixed 23 days ago: #2433. Current version of `datasets` (1.8.0) includes the patch.\r\n- I am investigating the issue with `drop`. I'll ping you to keep you informed.",
"Hi @VictorSanh, the issue is already fixed and merged into master branch and will be included in our next release version 1.9.0.",
"thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/5532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5532/comments | https://api.github.com/repos/huggingface/datasets/issues/5532/events | https://github.com/huggingface/datasets/issues/5532 | 1,584,505,128 | I_kwDODunzps5ecaEo | 5,532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | [] | closed | false | null | 1 | 2023-02-14T16:52:29Z | 2023-02-15T16:09:19Z | 2023-02-15T16:09:19Z | null | ### Describe the bug
When I have a dataset with very few (e.g. 1) examples per class and I call the train_test_split function on it, the single example of a class sometimes ends up in the test set and is thus never considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "example3"},
{'label': 1, 'text': "example4"},
{'label': 0, 'text': "example5"},
{'label': 1, 'text': "example6"},
{'label': 2, 'text': "example7"},
{'label': 2, 'text': "example8"}
]
for _ in range(10):
data_set = Dataset.from_list(data)
data_set = data_set.train_test_split(test_size=0.5)
data_set["train"]
unique_labels_train = np.unique(data_set["train"][:]["label"])
unique_labels_test = np.unique(data_set["test"][:]["label"])
assert len(unique_labels_train) >= len(unique_labels_test)
```
### Expected behavior
I expect to have every available class at least once in my training set.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5532/timeline | null | completed | null | null | false | [
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n {'label': 1, 'text': \"example3\"},\r\n {'label': 1, 'text': \"example4\"},\r\n {'label': 0, 'text': \"example5\"},\r\n {'label': 1, 'text': \"example6\"},\r\n {'label': 2, 'text': \"example7\"},\r\n {'label': 2, 'text': \"example8\"}\r\n]\r\n\r\nfor _ in range(10):\r\n data_set = Dataset.from_list(data)\r\n data_set = data_set.cast_column(\"label\", ClassLabel(num_classes=3))\r\n data_set = data_set.train_test_split(test_size=0.5, stratify_by_column=\"label\")\r\n unique_labels_train = np.unique(data_set[\"train\"][:][\"label\"])\r\n unique_labels_test = np.unique(data_set[\"test\"][:][\"label\"])\r\n assert len(unique_labels_train) >= len(unique_labels_test) \r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1806/comments | https://api.github.com/repos/huggingface/datasets/issues/1806/events | https://github.com/huggingface/datasets/pull/1806 | 798,607,869 | MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz | 1,806 | Update details to MLSUM dataset | [] | closed | false | null | 1 | 2021-02-01T18:35:12Z | 2021-02-01T18:46:28Z | 2021-02-01T18:46:21Z | null | Update details to MLSUM dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1806/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"merged_at": "2021-02-01T18:46:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1806"
} | true | [
"Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4075/comments | https://api.github.com/repos/huggingface/datasets/issues/4075/events | https://github.com/huggingface/datasets/issues/4075 | 1,188,462,162 | I_kwDODunzps5G1n5S | 4,075 | Add CCAgT dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",
"default": false,
"description": "Vision datasets",
"id": 3608941089,
"name": "vision",
"node_id": "LA_kwDODunzps7XHBIh",
"url": "https://api.github.com/repos/huggingface/datasets/labels/vision"
}
] | closed | false | null | 4 | 2022-03-31T18:20:28Z | 2022-07-06T19:03:42Z | 2022-07-06T19:03:42Z | null | ## Adding a Dataset
- **Name:** CCAgT dataset: Images of Cervical Cells with AgNOR Stain Technique
- **Description:** The dataset contains 2540 images (1600x1200, where each pixel is 0.111μm×0.111μm) from three different slides, with at least one nucleus per image. The images are from fields of a sample cervical slide stained with silver, a method known as Argyrophilic Nucleolar Organizer Regions (AgNOR).
- **Paper:** https://doi.org/10.1109/cbms49503.2020.00110
- **Data:** https://arquivos.ufsc.br/d/373be2177a33426a9e6c/ or https://drive.google.com/drive/u/4/folders/1TBpYCv6S1ydASLauSzcsvO7Wc5O-WUw0
- **Motivation:** This is a unique dataset (because of the stain), for a major health problem, cervical cancer, with real data.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Hi, this is a public version of the dataset that I have been working on; soon we will have another version of it. Until the new version goes out, I thought I would add this dataset here, if it makes sense for the repository. You can assign the task to me if possible. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4075/timeline | null | completed | null | null | false | [
"Awesome ! Let us know if you have questions or if we can help ;) I'm assigning you\r\n\r\nPS: if possible, please try to not use Google Drive links in your dataset script, since Google Drive has download quotas and is not always reliable.",
"HI, I was waiting to come out in the second version to do the implementation.\r\n\r\n- Paper: https://dx.doi.org/10.2139/ssrn.4126881\r\n- Data: [Data mendelay](http://doi.org/10.17632/wg4bpm33hj.2)",
"Nice ! 🚀 ",
"The link of CCAgT dataset is: https://huggingface.co/datasets/lapix/CCAgT"
] |
https://api.github.com/repos/huggingface/datasets/issues/4096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4096/comments | https://api.github.com/repos/huggingface/datasets/issues/4096/events | https://github.com/huggingface/datasets/issues/4096 | 1,193,165,229 | I_kwDODunzps5HHkGt | 4,096 | Add support for streaming Zarr stores for hosted datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 9 | 2022-04-05T13:38:32Z | 2022-04-25T08:04:12Z | 2022-04-21T08:12:58Z | null | **Is your feature request related to a problem? Please describe.**
Lots of geospatial data is stored in the Zarr format. This format works well for n-dimensional data and coordinates, and can have good compression. Unfortunately, HF datasets doesn't support streaming in data in Zarr format as far as I can tell. Zarr stores are designed to be easily streamed in from cloud storage, especially with xarray and fsspec. Since geospatial data tends to be very large, and on the order of TBs of data or 10's of TBs of data for a single dataset, it can be difficult to store the dataset locally for users. Just adding Zarr stores with HF git doesn't work well (see https://github.com/huggingface/datasets/issues/3823) as Zarr splits the data into lots of small chunks for fast loading, and that doesn't work well with git. I've somewhat gotten around that issue by tarring each Zarr store and uploading them as a single file, which seems to be working (see https://huggingface.co/datasets/openclimatefix/gfs-reforecast for example data files, although the script isn't written yet). This does mean that streaming doesn't quite work though. On the other hand, in https://huggingface.co/datasets/openclimatefix/eumetsat_uk_hrv we stream in a Zarr store from a public GCP bucket quite easily.
**Describe the solution you'd like**
A way to upload Zarr stores for hosted datasets so that we can stream them with xarray and fsspec.
**Describe alternatives you've considered**
Tarring each Zarr store individually and just extracting them in the dataset script -> Downside: this is a lot of data that probably doesn't fit locally for many potential users.
Pre-preparing examples in a format like Parquet -> Would use a lot more storage and offer a lot less flexibility; in eumetsat_uk_hrv, we use the one Zarr store for multiple different configurations.
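For reference, a sketch of the single-file packaging suggested in the comments below: writing the store straight into a `zarr.ZipStore`, so an entire Zarr hierarchy becomes one uploadable file (toy data for illustration).

```python
import xarray as xr
import zarr

ds = xr.Dataset({"t2m": ("time", [280.0, 281.5, 279.9])})
store = zarr.ZipStore("data.zarr.zip", mode="w")
ds.to_zarr(store)  # all chunks end up inside a single ZIP file
store.close()  # important: finalizes the ZIP central directory
```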
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4096/timeline | null | completed | null | null | false | [
"Hi @jacobbieker, thanks for your request and study of possible alternatives.\r\n\r\nWe are very interested in finding a way to make `datasets` useful to you.\r\n\r\nLooking at the Zarr docs, I saw that among its storage alternatives, there is the ZIP file format: https://zarr.readthedocs.io/en/stable/api/storage.html#zarr.storage.ZipStore\r\n\r\nThis might be convenient for many reasons:\r\n- On the one hand, we avoid the Git issue with huge number of small files: chunks files are compressed into a single ZIP file\r\n- On the other hand, the ZIP file format is specially suited for streaming data because it allows random access to its component files (i.e. it supports random access to its chunks)\r\n\r\nAnyway, I think that a Python loading script will be necessary: you need to implement additional logic to select certain chunks (based on date or other criteria).\r\n\r\nPlease, let me know if this makes sense to you.",
"Ah okay, I missed the option of zip files for zarr, I'll try that with our repos and see if it works! Thanks a lot!",
"Hi @jacobbieker, does the Zarr ZipStore work for your use case?",
"Hi,\r\n\r\nYes, it seems to! I got it working for https://huggingface.co/datasets/openclimatefix/mrms thanks for the help! ",
"On behalf of the Zarr developers, let me say THANK YOU for working to support Zarr on HF! 🙏 Zarr is a 100% open-source and community driven project (fiscally sponsored by NumFocus). We see it as an ideal format for ML training datasets, particularly in scientific domains.\r\n\r\nI think the solution of zipping the Zarr store is a reasonable way to balance the constraints of Git LFS with the structure of Zarr.\r\n\r\nIt would be amazing to get something on the [Hugging Face Datasets Docs](https://huggingface.co/docs/datasets/index) about how to best work with Zarr. Let me know if there's a way I could help with that effort.",
"Also just noting here that I was able to lazily open @jacobbieker's dataset over the internet from HF hub 🚀 !\r\n\r\n```python\r\nimport xarray as xr\r\nurl = \"https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\"\r\nzip_url = 'zip:///::' + url\r\nds = xr.open_dataset(zip_url, engine='zarr', chunks={})\r\n```\r\n\r\n<img width=\"740\" alt=\"image\" src=\"https://user-images.githubusercontent.com/1197350/164508663-bc75cdc0-734d-44f4-9562-2877ecfdf433.png\">\r\n",
"However, I wasn't able to get streaming working using the Datasets api:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\nitem = next(iter(ds))\r\n```\r\n\r\n<details>\r\n<summary>FileNotFoundError traceback</summary>\r\n\r\n```\r\nNo config specified, defaulting to: mrms/2021\r\nzip://::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\ndata/2016_001.zarr.zip\r\nzip://2016_001.zarr.zip::https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip\r\n---------------------------------------------------------------------------\r\nFileNotFoundError Traceback (most recent call last)\r\nInput In [1], in <cell line: 3>()\r\n 1 from datasets import load_dataset\r\n 2 ds = load_dataset(\"openclimatefix/mrms\", streaming=True, split='train')\r\n----> 3 item = next(iter(ds))\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:497, in IterableDataset.__iter__(self)\r\n 496 def __iter__(self):\r\n--> 497 for key, example in self._iter():\r\n 498 if self.features:\r\n 499 # we encode the example for ClassLabel feature types for example\r\n 500 encoded_example = self.features.encode_example(example)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:494, in IterableDataset._iter(self)\r\n 492 else:\r\n 493 ex_iterable = self._ex_iterable\r\n--> 494 yield from ex_iterable\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/datasets/iterable_dataset.py:87, in ExamplesIterable.__iter__(self)\r\n 86 def __iter__(self):\r\n---> 87 yield from self.generate_examples_fn(**self.kwargs)\r\n\r\nFile ~/.cache/huggingface/modules/datasets_modules/datasets/openclimatefix--mrms/2a6f697014d7eb3caf586ca137d47ca38785ae2fe36248611b021f8248b59936/mrms.py:150, in MRMS._generate_examples(self, filepath, split)\r\n 147 filepath = \"[https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip](https://huggingface.co/datasets/openclimatefix/mrms/resolve/main/data/2016_001.zarr.zip%3C/span%3E%3Cspan) style=\"color:rgb(175,0,0)\">\"\r\n 148 # TODO: This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.\r\n 149 # The `key` is for legacy reasons (tfds) and is not important in itself, but must be unique for each example.\r\n--> 150 with zarr.storage.FSStore(fsspec.open(\"zip::\" + filepath, mode='r'), mode='r') as store:\r\n 151 data = xr.open_zarr(store)\r\n 152 for key, row in enumerate(data[\"time\"].values):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/zarr/storage.py:1120, in FSStore.__init__(self, url, normalize_keys, key_separator, mode, exceptions, dimension_separator, **storage_options)\r\n 1117 import fsspec\r\n 1118 self.normalize_keys = normalize_keys\r\n-> 1120 protocol, _ = fsspec.core.split_protocol(url)\r\n 1121 # set auto_mkdir to True for local file system\r\n 1122 if protocol in (None, \"file\") and not storage_options.get(\"auto_mkdir\"):\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:514, in split_protocol(urlpath)\r\n 512 def split_protocol(urlpath):\r\n 513 \"\"\"Return protocol, path pair\"\"\"\r\n--> 514 urlpath = stringify_path(urlpath)\r\n 515 if \"://\" in urlpath:\r\n 516 protocol, path = urlpath.split(\"://\", 1)\r\n\r\nFile 
/opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/utils.py:315, in stringify_path(filepath)\r\n 313 return filepath\r\n 314 elif hasattr(filepath, \"__fspath__\"):\r\n--> 315 return filepath.__fspath__()\r\n 316 elif isinstance(filepath, pathlib.Path):\r\n 317 return str(filepath)\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:98, in OpenFile.__fspath__(self)\r\n 96 def __fspath__(self):\r\n 97 # may raise if cannot be resolved to local file\r\n---> 98 return self.open().__fspath__()\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:140, in OpenFile.open(self)\r\n 132 def open(self):\r\n 133 \"\"\"Materialise this as a real open file without context\r\n 134 \r\n 135 The file should be explicitly closed to avoid enclosed file\r\n (...)\r\n 138 been deleted; but a with-context is better style.\r\n 139 \"\"\"\r\n--> 140 out = self.__enter__()\r\n 141 closer = out.close\r\n 142 fobjects = self.fobjects.copy()[:-1]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/core.py:103, in OpenFile.__enter__(self)\r\n 100 def __enter__(self):\r\n 101 mode = self.mode.replace(\"t\", \"\").replace(\"b\", \"\") + \"b\"\r\n--> 103 f = self.fs.open(self.path, mode=mode)\r\n 105 self.fobjects = [f]\r\n 107 if self.compression is not None:\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/spec.py:1009, in AbstractFileSystem.open(self, path, mode, block_size, cache_options, compression, **kwargs)\r\n 1007 else:\r\n 1008 ac = kwargs.pop(\"autocommit\", not self._intrans)\r\n-> 1009 f = self._open(\r\n 1010 path,\r\n 1011 mode=mode,\r\n 1012 block_size=block_size,\r\n 1013 autocommit=ac,\r\n 1014 cache_options=cache_options,\r\n 1015 **kwargs,\r\n 1016 )\r\n 1017 if compression is not None:\r\n 1018 from fsspec.compression import compr\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/implementations/zip.py:96, in ZipFileSystem._open(self, path, mode, block_size, autocommit, cache_options, **kwargs)\r\n 94 if mode != \"rb\":\r\n 95 raise NotImplementedError\r\n---> 96 info = self.info(path)\r\n 97 out = self.zip.open(path, \"r\")\r\n 98 out.size = info[\"size\"]\r\n\r\nFile /opt/miniconda3/envs/hugginface/lib/python3.9/site-packages/fsspec/archive.py:42, in AbstractArchiveFileSystem.info(self, path, **kwargs)\r\n 40 return self.dir_cache[path + \"/\"]\r\n 41 else:\r\n---> 42 raise FileNotFoundError(path)\r\n\r\nFileNotFoundError:\r\n```\r\n\r\n</details>\r\n\r\nIs this a bug? Or am I just doing it wrong...",
"I'm still messing around with that dataset, so the data might have moved. I currently have each year of MRMS precipitation rate data as it's own zarr, but as they are quite large (on order of 100GB each) I'm working to split them into single days, and as such they are still being moved around, I was just trying to get a proof of concept working originally. ",
"I've mostly finished rearranging the data now and uploading some more, so this works now:\r\n```python\r\nimport datasets\r\nds = datasets.load_dataset(\"openclimatefix/mrms\", streaming=True, split=\"train\")\r\nitem = next(iter(ds))\r\nprint(item.keys())\r\nprint(item[\"timestamp\"])\r\n```\r\n\r\nThe MRMS data now goes most of 2016-2022, with quite a few gaps I'm working on filling in"
] |
https://api.github.com/repos/huggingface/datasets/issues/4568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4568/comments | https://api.github.com/repos/huggingface/datasets/issues/4568/events | https://github.com/huggingface/datasets/issues/4568 | 1,284,655,624 | I_kwDODunzps5MkkoI | 4,568 | XNLI cache reload is very slow | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-06-25T16:43:56Z | 2022-07-04T14:29:40Z | 2022-07-04T14:29:40Z | null | ### Reproduce
Using `2.3.3.dev0`
`from datasets import load_dataset`
`load_dataset("xnli", "en")`
Turn off Internet
`load_dataset("xnli", "en")`
I eventually cancelled the second `load_dataset` because it took so long. It would be great to have a way to specify e.g. `only_load_from_cache` so the library avoids trying to download when there is no Internet. If I leave it running it does work, but it takes far longer than when there is Internet. I would expect loading from the cache to take the same amount of time regardless of whether there is Internet.
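A workaround sketch: the `HF_DATASETS_OFFLINE` environment variable (visible in the traceback below as `config.HF_DATASETS_OFFLINE`) skips the network checks and forces cache-only loading.

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

ds = load_dataset("xnli", "en")  # served from the local cache, no HTTP calls
```

The traceback: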
```
---------------------------------------------------------------------------
gaierror Traceback (most recent call last)
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
/opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options)
71
---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
73 af, socktype, proto, canonname, sa = res
/opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags)
751 addrlist = []
--> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
753 af, socktype, proto, canonname, sa = res
gaierror: [Errno -3] Temporary failure in name resolution
During handling of the above exception, another exception occurred:
KeyboardInterrupt Traceback (most recent call last)
/tmp/ipykernel_33/3594208039.py in <module>
----> 1 load_dataset("xnli", "en")
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1673 revision=revision,
1674 use_auth_token=use_auth_token,
-> 1675 **config_kwargs,
1676 )
1677
/opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1494 download_mode=download_mode,
1495 data_dir=data_dir,
-> 1496 data_files=data_files,
1497 )
1498
/opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1182 download_config=download_config,
1183 download_mode=download_mode,
-> 1184 dynamic_modules_path=dynamic_modules_path,
1185 ).get_module()
1186 elif path.count("/") == 1: # community dataset on the Hub
/opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path)
506 self.dynamic_modules_path = dynamic_modules_path
507 assert self.name.count("/") == 0
--> 508 increase_load_count(name, resource_type="dataset")
509
510 def download_loading_script(self, revision: Optional[str]) -> str:
/opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type)
166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS:
167 try:
--> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset"))
169 except Exception:
170 pass
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries)
93 return http_head(
94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset),
---> 95 max_retries=max_retries,
96 )
97
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries)
445 allow_redirects=allow_redirects,
446 timeout=timeout,
--> 447 max_retries=max_retries,
448 )
449 return response
/opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params)
366 tries += 1
367 try:
--> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params)
369 success = True
370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err:
/opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs)
59 # cases, and look like a memory leak in others.
60 with sessions.Session() as session:
---> 61 return session.request(method=method, url=url, **kwargs)
62
63
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
527 }
528 send_kwargs.update(settings)
--> 529 resp = self.send(prep, **send_kwargs)
530
531 return resp
/opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs)
643
644 # Send the request
--> 645 r = adapter.send(request, **kwargs)
646
647 # Total elapsed time of the request (approximately)
/opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies)
448 decode_content=False,
449 retries=self.max_retries,
--> 450 timeout=timeout
451 )
452
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
708 body=body,
709 headers=headers,
--> 710 chunked=chunked,
711 )
712
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
384 # Trigger any extra validation we need to do.
385 try:
--> 386 self._validate_conn(conn)
387 except (SocketTimeout, BaseSSLError) as e:
388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout.
/opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn)
1038 # Force connect early to allow us to validate the connection.
1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock`
-> 1040 conn.connect()
1041
1042 if not conn.is_verified:
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self)
356 def connect(self):
357 # Add certificate verification
--> 358 self.sock = conn = self._new_conn()
359 hostname = self.host
360 tls_in_tls = False
/opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self)
173 try:
174 conn = connection.create_connection(
--> 175 (self._dns_host, self.port), self.timeout, **extra_kw
176 )
177
KeyboardInterrupt:
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4568/timeline | null | completed | null | null | false | [
"Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ",
"Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.",
"Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)."
] |
https://api.github.com/repos/huggingface/datasets/issues/2329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2329/comments | https://api.github.com/repos/huggingface/datasets/issues/2329/events | https://github.com/huggingface/datasets/pull/2329 | 877,924,198 | MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0 | 2,329 | Add cache dir for in-memory datasets | [] | closed | false | null | 7 | 2021-05-06T19:35:32Z | 2021-06-08T19:46:48Z | 2021-06-08T19:06:46Z | null | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2329/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2329.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2329",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2329.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2329"
} | true | [
"Yes, having `cache_dir` as an attribute looks cleaner.\r\n\r\n\r\n\r\n",
"Good job! Looking forward to this new feature! 🥂",
"@lhoestq Sorry for the late reply. Yes, I'll start working on tests. Thanks for the detailed explanation of the current issues with caching (like the idea of adding the `use_caching` parameter to `load_dataset`) ",
"@lhoestq Sure. I'm aware this is a high-priority issue to some extent, so feel free to take over.\r\n\r\nFew suggestions I have:\r\n* there is a slight difference between setting `use_caching` to `False` in `load_dataset` and disabling caching globally with `set_caching_enabled(False)` because the former will never execute the following code (`self._cache_dir` is always `False`): \r\nhttps://github.com/huggingface/datasets/blob/c231abdb174987419bbde3360b5b9d6a4672c736/src/datasets/arrow_dataset.py#L1807-L1824\r\n, so I'm just checking whether this is intended (if yes, maybe the docs should mention this) or not?\r\n* think we should add the `use_caching` parameter to every method that has the `keep_in_memory` (and `in_memory` 😃) parameter in its signature for better consistency, but I say let's address this in a separate PR. IMO we need one more PR that will deal exclusively with consistency in the caching logic.",
"Hi @mariosasko \r\nWe discussed internally and we think that this feature might not be the direction we're doing to take for these reasons:\r\n\r\n- it goes against our simple definition of caching: on-disk == uses file cache, and in-memory == nothing is written to disk. I think it adds too much complexity just for a minimal flexibility addition\r\n- there are a few edge cases which are really confusing:\r\n - map on an in memory dataset with a cache_file_name specified by the user -> should the result be in memory or from disk ?\r\n - it would require a special cache directory just for in memory datasets, since they don’t have a preferred directory for caching\r\n- it would break a lot of stuff and would require to rewrite significant parts of the core code and the tests\r\n\r\n\r\nSo in the end we're probably going to close this PR.\r\nLet me know what you think, and thank you anyway for your help on this !",
"Hi,\r\n\r\nI'm fine with that. I agree this adds too much complexity. Btw, I like the idea of reverting default in-memory for small datasets that led to this PR.",
"Superseded by #2460 (to close issue #2458)."
] |
https://api.github.com/repos/huggingface/datasets/issues/1530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1530/comments | https://api.github.com/repos/huggingface/datasets/issues/1530/events | https://github.com/huggingface/datasets/pull/1530 | 764,749,507 | MDExOlB1bGxSZXF1ZXN0NTM4NjY4ODI3 | 1,530 | add indonlu benchmark datasets | [] | closed | false | null | 0 | 2020-12-13T01:56:09Z | 2020-12-16T11:11:43Z | 2020-12-16T11:11:43Z | null | The IndoNLU benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding systems for the Indonesian language. There are 12 datasets in IndoNLU.
This is a new clean PR from [#1322](https://github.com/huggingface/datasets/pull/1322) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1530/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1530",
"merged_at": "2020-12-16T11:11:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1530"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1209/comments | https://api.github.com/repos/huggingface/datasets/issues/1209/events | https://github.com/huggingface/datasets/pull/1209 | 757,965,934 | MDExOlB1bGxSZXF1ZXN0NTMzMjI1NzMw | 1,209 | [AfriBooms] Dataset exists already | [] | closed | false | null | 2 | 2020-12-06T16:35:13Z | 2020-12-07T16:52:24Z | 2020-12-07T16:52:23Z | null | When trying to add "AfriBooms": https://docs.google.com/spreadsheets/d/12ShVow0M6RavnzbBEabm5j5dv12zBaf0y-niwEPPlo4/edit#gid=1386399609 I noticed that the dataset already exists as a config of Universal Dependencies (universal_dependencies.py). I checked, and the data matches exactly, so the new data link does not provide any new data.
This PR improves the config's description a bit by linking to the paper. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1209/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1209",
"merged_at": "2020-12-07T16:52:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1209"
} | true | [
"It's so cool seeing all these datasets fly by and see how they are still of interest. I did my internship at the research group of Liesbeth Augustinus et al. They're a very kind group of people!",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/5447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5447/comments | https://api.github.com/repos/huggingface/datasets/issues/5447/events | https://github.com/huggingface/datasets/pull/5447 | 1,550,599,193 | PR_kwDODunzps5IM0Nu | 5,447 | Fix CI by temporarily pinning fsspec < 2023.1.0 | [] | closed | false | null | 2 | 2023-01-20T10:11:02Z | 2023-01-20T10:38:13Z | 2023-01-20T10:28:43Z | null | Temporarily pin fsspec < 2023.1.0
Fix #5445. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5447",
"merged_at": "2023-01-20T10:28:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5447"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.011875 / 0.011353 (0.000522) | 0.008188 / 0.011008 (-0.002821) | 0.131137 / 0.038508 (0.092629) | 0.038127 / 0.023109 (0.015018) | 0.383864 / 0.275898 (0.107966) | 0.458617 / 0.323480 (0.135137) | 0.010989 / 0.007986 (0.003003) | 0.004892 / 0.004328 (0.000563) | 0.101955 / 0.004250 (0.097704) | 0.045081 / 0.037052 (0.008029) | 0.409768 / 0.258489 (0.151279) | 0.446597 / 0.293841 (0.152756) | 0.058588 / 0.128546 (-0.069958) | 0.020872 / 0.075646 (-0.054774) | 0.432982 / 0.419271 (0.013711) | 0.075875 / 0.043533 (0.032342) | 0.380923 / 0.255139 (0.125784) | 0.432994 / 0.283200 (0.149795) | 0.122678 / 0.141683 (-0.019005) | 1.857865 / 1.452155 (0.405710) | 1.927801 / 1.492716 (0.435085) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212941 / 0.018006 (0.194935) | 0.527977 / 0.000490 (0.527488) | 0.002996 / 0.000200 (0.002797) | 0.000105 / 0.000054 (0.000051) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030046 / 0.037411 (-0.007366) | 0.126384 / 0.014526 (0.111858) | 0.138307 / 0.176557 (-0.038250) | 0.185338 / 0.737135 (-0.551797) | 0.144733 / 0.296338 (-0.151606) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.627096 / 0.215209 (0.411887) | 6.418014 / 2.077655 (4.340360) | 2.547675 / 1.504120 (1.043555) | 2.195552 / 1.541195 (0.654357) | 2.200377 / 1.468490 
(0.731887) | 1.289935 / 4.584777 (-3.294842) | 5.670839 / 3.745712 (1.925127) | 5.252597 / 5.269862 (-0.017265) | 2.878470 / 4.565676 (-1.687207) | 0.143754 / 0.424275 (-0.280521) | 0.014814 / 0.007607 (0.007207) | 0.810073 / 0.226044 (0.584028) | 8.183757 / 2.268929 (5.914829) | 3.375525 / 55.444624 (-52.069099) | 2.594048 / 6.876477 (-4.282428) | 2.598095 / 2.142072 (0.456023) | 1.554493 / 4.805227 (-3.250734) | 0.263159 / 6.500664 (-6.237505) | 0.089822 / 0.075469 (0.014353) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.660847 / 1.841788 (-0.180941) | 18.434283 / 8.074308 (10.359975) | 21.764887 / 10.191392 (11.573495) | 0.264524 / 0.680424 (-0.415900) | 0.048519 / 0.534201 (-0.485682) | 0.587468 / 0.579283 (0.008185) | 0.634142 / 0.434364 (0.199778) | 0.675374 / 0.540337 (0.135037) | 0.777510 / 1.386936 (-0.609426) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010021 / 0.011353 (-0.001332) | 0.006207 / 0.011008 (-0.004801) | 0.130490 / 0.038508 (0.091982) | 0.037957 / 0.023109 (0.014848) | 0.489381 / 0.275898 (0.213483) | 0.536522 / 0.323480 (0.213042) | 0.008611 / 0.007986 (0.000626) | 0.004894 / 0.004328 (0.000565) | 0.101617 / 0.004250 (0.097367) | 0.052629 / 0.037052 (0.015577) | 0.509211 / 0.258489 (0.250721) | 0.545023 / 0.293841 (0.251182) | 0.057468 / 0.128546 (-0.071078) | 0.023393 / 0.075646 (-0.052253) | 0.431408 / 0.419271 (0.012137) | 0.064967 / 0.043533 (0.021434) | 0.495261 / 0.255139 (0.240122) | 0.527098 / 0.283200 (0.243898) | 0.113172 / 0.141683 (-0.028511) | 1.937072 / 1.452155 (0.484918) | 2.048413 / 1.492716 (0.555697) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.245406 / 0.018006 (0.227399) | 0.526772 / 0.000490 (0.526283) | 0.004379 / 0.000200 (0.004179) | 0.000114 / 0.000054 (0.000060) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031785 / 0.037411 (-0.005626) | 0.130949 / 0.014526 (0.116424) | 0.145660 / 0.176557 (-0.030896) | 0.186991 / 0.737135 (-0.550144) | 0.151000 / 0.296338 (-0.145338) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.708643 / 0.215209 (0.493434) | 7.179252 / 2.077655 (5.101597) | 3.143375 / 1.504120 (1.639255) | 2.714298 / 1.541195 (1.173103) | 2.773441 / 1.468490 (1.304951) | 1.312821 / 4.584777 (-3.271956) | 5.798396 / 3.745712 (2.052684) | 3.253215 / 5.269862 (-2.016646) | 2.147260 / 4.565676 (-2.418416) | 0.154673 / 0.424275 (-0.269602) | 0.014918 / 0.007607 (0.007311) | 0.860618 / 0.226044 (0.634573) | 8.774455 / 2.268929 (6.505527) | 3.925020 / 55.444624 (-51.519604) | 3.139361 / 6.876477 (-3.737115) | 3.208883 / 2.142072 (1.066810) | 1.547305 / 4.805227 (-3.257922) | 0.268814 / 6.500664 (-6.231850) | 0.084578 / 0.075469 (0.009109) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.694990 / 1.841788 (-0.146798) | 18.619183 / 8.074308 (10.544875) | 21.929886 / 10.191392 (11.738494) | 0.265763 / 0.680424 (-0.414661) | 0.028325 / 0.534201 (-0.505876) | 0.552910 / 0.579283 (-0.026373) | 0.616864 / 0.434364 (0.182500) | 0.637858 / 0.540337 (0.097521) | 0.744508 / 1.386936 (-0.642428) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5633/comments | https://api.github.com/repos/huggingface/datasets/issues/5633/events | https://github.com/huggingface/datasets/issues/5633 | 1,621,469,970 | I_kwDODunzps5gpasS | 5,633 | Cannot import datasets | [] | closed | false | null | 1 | 2023-03-13T13:14:44Z | 2023-03-13T17:54:19Z | 2023-03-13T17:54:19Z | null | ### Describe the bug
Hi,
I cannot even import the library :( I installed it by running:
```
$ conda install datasets
```
Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran:
```
$ conda remove datasets
$ conda install -c huggingface datasets
```
Please see 'Steps to reproduce the bug' for the specific error; the steps to reproduce are simply importing the library.
### Steps to reproduce the bug
```
$ python3
Python 3.8.15 (default, Nov 24 2022, 15:19:38)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module>
from .arrow_reader import ArrowReader
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module>
import pyarrow.parquet as pq
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module>
from pyarrow._parquet import (ParquetReader, Statistics, # noqa
ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so)
```
### Expected behavior
I would expect the statement `import datasets` to cause no error.
### Environment info
Output of `conda list`:
```
# packages in environment at /home/jack/.conda/envs/pbalawender_zpp:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
_openmp_mutex 5.1 1_gnu
abseil-cpp 20210324.2 h2531618_0
advertools 0.13.2 pypi_0 pypi
aiofiles 0.8.0 pypi_0 pypi
aiohttp 3.8.3 py38h5eee18b_0
aiosignal 1.2.0 pyhd3eb1b0_0
aiosqlite 0.17.0 pypi_0 pypi
anyio 3.6.2 pypi_0 pypi
aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi
argon2-cffi 21.3.0 pypi_0 pypi
argon2-cffi-bindings 21.2.0 pypi_0 pypi
arrow 1.2.3 pypi_0 pypi
arrow-cpp 3.0.0 py38h6b21186_4
asttokens 2.2.0 pypi_0 pypi
async-timeout 4.0.2 py38h06a4308_0
attrs 22.1.0 py38h06a4308_0
automat 22.10.0 pypi_0 pypi
aws-c-common 0.4.57 he6710b0_1
aws-c-event-stream 0.1.6 h2531618_5
aws-checksums 0.1.9 he6710b0_0
aws-sdk-cpp 1.8.185 hce553d0_0
babel 2.11.0 pypi_0 pypi
backcall 0.2.0 pyhd3eb1b0_0
beautifulsoup4 4.11.1 pypi_0 pypi
blas 1.0 mkl
bleach 5.0.1 pypi_0 pypi
boost-cpp 1.73.0 h27cfd23_11
bottleneck 1.3.5 py38h7deecbd_0
brotli 1.0.9 h5eee18b_7
brotli-bin 1.0.9 h5eee18b_7
brotlipy 0.7.0 py38h27cfd23_1003
bzip2 1.0.8 h7b6447c_0
c-ares 1.18.1 h7f8727e_0
ca-certificates 2023.01.10 h06a4308_0
certifi 2022.9.24 pypi_0 pypi
cffi 1.15.1 py38h5eee18b_3
charset-normalizer 2.1.1 pypi_0 pypi
click 8.1.3 pypi_0 pypi
constantly 15.1.0 pypi_0 pypi
contourpy 1.0.6 pypi_0 pypi
cryptography 38.0.4 pypi_0 pypi
cssselect 1.2.0 pypi_0 pypi
cudatoolkit 10.1.243 h8cb64d8_10 conda-forge
cycler 0.11.0 pypi_0 pypi
dacite 1.6.0 pypi_0 pypi
dataclasses 0.8 pyh6d0b6a4_7
datasets 1.18.4 py_0 huggingface
datetime 4.7 pypi_0 pypi
debugpy 1.6.4 pypi_0 pypi
decorator 5.1.1 pyhd3eb1b0_0
defusedxml 0.7.1 pypi_0 pypi
dill 0.3.6 py38h06a4308_0
docker-pycreds 0.4.0 pypi_0 pypi
double-conversion 3.1.5 he6710b0_1
entrypoints 0.4 py38h06a4308_0
executing 0.8.3 pyhd3eb1b0_0
filelock 3.8.0 pypi_0 pypi
flake8 6.0.0 pypi_0 pypi
flask 2.1.3 py38h06a4308_0
flit-core 3.6.0 pyhd3eb1b0_0
fonttools 4.38.0 pypi_0 pypi
fqdn 1.5.1 pypi_0 pypi
freetype 2.12.1 h4a9f257_0
frozenlist 1.3.3 py38h5eee18b_0
fsspec 2022.11.0 py38h06a4308_0
gensim 4.2.0 pypi_0 pypi
gflags 2.2.2 he6710b0_0
giflib 5.2.1 h5eee18b_3
gitdb 4.0.10 pypi_0 pypi
gitpython 3.1.30 pypi_0 pypi
glog 0.5.0 h2531618_0
grpc-cpp 1.39.0 hae934f6_5
huggingface-hub 0.11.1 pypi_0 pypi
huggingface_hub 0.13.1 py_0 huggingface
hyperlink 21.0.0 pypi_0 pypi
icu 58.2 he6710b0_3
idna 3.4 py38h06a4308_0
importlib-metadata 5.1.0 pypi_0 pypi
importlib_metadata 4.11.3 hd3eb1b0_0
importlib_resources 5.2.0 pyhd3eb1b0_1
incremental 22.10.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561
ipykernel 6.17.1 pyh210e3f2_0 conda-forge
ipython 8.7.0 pypi_0 pypi
ipython-genutils 0.2.0 pypi_0 pypi
ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge
isoduration 20.11.0 pypi_0 pypi
itemadapter 0.7.0 pypi_0 pypi
itemloaders 1.0.6 pypi_0 pypi
itsdangerous 2.0.1 pyhd3eb1b0_0
jedi 0.18.2 pypi_0 pypi
jinja2 3.1.2 py38h06a4308_0
jmespath 1.0.1 pypi_0 pypi
joblib 1.2.0 pypi_0 pypi
jpeg 9b h024ee3a_2
json5 0.9.10 pypi_0 pypi
jsonpickle 3.0.0 pypi_0 pypi
jsonpointer 2.3 pypi_0 pypi
jsonschema 4.17.3 py38h06a4308_0
jupyter-core 5.1.0 pypi_0 pypi
jupyter-events 0.5.0 pypi_0 pypi
jupyter-server 1.23.3 pypi_0 pypi
jupyter-server-fileid 0.6.0 pypi_0 pypi
jupyter-server-ydoc 0.4.0 pypi_0 pypi
jupyter-ydoc 0.2.2 pypi_0 pypi
jupyter_client 7.4.9 py38h06a4308_0
jupyter_core 5.2.0 py38h06a4308_0
jupyterlab 3.6.0a4 pypi_0 pypi
jupyterlab-pygments 0.2.2 pypi_0 pypi
jupyterlab-server 2.16.3 pypi_0 pypi
jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge
kiwisolver 1.4.4 pypi_0 pypi
krb5 1.19.4 h568e23c_0
lcms2 2.12 h3be6417_0
ld_impl_linux-64 2.38 h1181459_1
libboost 1.73.0 h3ff78a5_11
libbrotlicommon 1.0.9 h5eee18b_7
libbrotlidec 1.0.9 h5eee18b_7
libbrotlienc 1.0.9 h5eee18b_7
libcurl 7.88.1 h91b91d3_0
libedit 3.1.20221030 h5eee18b_0
libev 4.33 h7f8727e_1
libevent 2.1.12 h8f2d780_0
libffi 3.4.2 h6a678d5_6
libgcc-ng 11.2.0 h1234567_1
libgomp 11.2.0 h1234567_1
libnghttp2 1.46.0 hce63b2e_0
libpng 1.6.39 h5eee18b_0
libprotobuf 3.17.2 h4ff587b_1
libsodium 1.0.18 h7b6447c_0
libssh2 1.10.0 h8f2d780_0
libstdcxx-ng 11.2.0 h1234567_1
libthrift 0.14.2 hcc01f38_0
libtiff 4.1.0 h2733197_1
libuv 1.44.2 h5eee18b_0
libwebp 1.2.0 h89dd481_0
lz4-c 1.9.4 h6a678d5_0
markupsafe 2.1.1 py38h7f8727e_0
matplotlib 3.6.2 pypi_0 pypi
matplotlib-inline 0.1.6 py38h06a4308_0
mccabe 0.7.0 pypi_0 pypi
mistune 2.0.4 pypi_0 pypi
mkl 2021.4.0 h06a4308_640
mkl-service 2.4.0 py38h7f8727e_0
mkl_fft 1.3.1 py38hd3c417c_0
mkl_random 1.2.2 py38h51133e4_0
morfeusz2 1.99.6 pypi_0 pypi
multidict 6.0.2 py38h5eee18b_0
multiprocess 0.70.14 py38h06a4308_0
nbclassic 0.4.8 pypi_0 pypi
nbclient 0.7.2 pypi_0 pypi
nbconvert 7.2.5 pypi_0 pypi
nbformat 5.7.0 py38h06a4308_0
ncurses 6.4 h6a678d5_0
nest-asyncio 1.5.6 py38h06a4308_0
ninja 1.10.2 h06a4308_5
ninja-base 1.10.2 hd09550d_5
notebook 6.5.2 pypi_0 pypi
notebook-shim 0.2.2 pypi_0 pypi
numexpr 2.8.4 py38he184ba9_0
numpy 1.23.5 py38h14f4228_0
numpy-base 1.23.5 py38h31eccc5_0
oauthlib 3.2.2 pypi_0 pypi
opencv-python 4.6.0.66 pypi_0 pypi
openssl 1.1.1t h7f8727e_0
orc 1.6.9 ha97a36c_3
packaging 22.0 py38h06a4308_0
pandas 1.5.2 pypi_0 pypi
pandocfilters 1.5.0 pypi_0 pypi
parsel 1.7.0 pypi_0 pypi
parso 0.8.3 pyhd3eb1b0_0
pathlib 1.0.1 pypi_0 pypi
pathtools 0.1.2 pypi_0 pypi
pexpect 4.8.0 pyhd3eb1b0_3
pickleshare 0.7.5 pyhd3eb1b0_1003
pillow 9.3.0 pypi_0 pypi
pip 22.2.2 py38h06a4308_0
pkgutil-resolve-name 1.3.10 py38h06a4308_0
platformdirs 2.5.4 pypi_0 pypi
prometheus-client 0.15.0 pypi_0 pypi
promise 2.3 pypi_0 pypi
prompt-toolkit 3.0.33 pypi_0 pypi
protego 0.2.1 pypi_0 pypi
protobuf 4.21.12 pypi_0 pypi
psutil 5.9.0 py38h5eee18b_0
ptyprocess 0.7.0 pyhd3eb1b0_2
pure_eval 0.2.2 pyhd3eb1b0_0
pyarrow 10.0.1 pypi_0 pypi
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pycodestyle 2.10.0 pypi_0 pypi
pycparser 2.21 pyhd3eb1b0_0
pydispatcher 2.0.6 pypi_0 pypi
pyflakes 3.0.1 pypi_0 pypi
pygments 2.11.2 pyhd3eb1b0_0
pyopenssl 22.1.0 pypi_0 pypi
pyrsistent 0.18.0 py38heee7806_0
pysocks 1.7.1 py38h06a4308_0
python 3.8.15 h7a1cb2a_2
python-dateutil 2.8.2 pyhd3eb1b0_0
python-dotenv 0.21.0 pypi_0 pypi
python-fastjsonschema 2.16.2 py38h06a4308_0
python-json-logger 2.0.4 pypi_0 pypi
python-xxhash 2.0.2 py38h5eee18b_1
pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch
pytz 2022.6 pypi_0 pypi
pyyaml 6.0 py38h5eee18b_1
pyzmq 23.2.0 py38h6a678d5_0
queuelib 1.6.2 pypi_0 pypi
re2 2022.04.01 h295c915_0
readline 8.2 h5eee18b_0
regex 2022.10.31 pypi_0 pypi
requests 2.28.1 py38h06a4308_0
requests-file 1.5.1 pypi_0 pypi
requests-oauthlib 1.3.1 pypi_0 pypi
rfc3339-validator 0.1.4 pypi_0 pypi
rfc3986-validator 0.1.1 pypi_0 pypi
scikit-learn 1.1.3 pypi_0 pypi
scipy 1.9.3 pypi_0 pypi
scrapy 2.7.1 pypi_0 pypi
seaborn 0.12.1 pypi_0 pypi
send2trash 1.8.0 pypi_0 pypi
sentry-sdk 1.12.1 pypi_0 pypi
service-identity 21.1.0 pypi_0 pypi
setproctitle 1.3.2 pypi_0 pypi
setuptools 65.6.3 pypi_0 pypi
shortuuid 1.0.11 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1
smart-open 6.2.0 pypi_0 pypi
smmap 5.0.0 pypi_0 pypi
snappy 1.1.9 h295c915_0
sniffio 1.3.0 pypi_0 pypi
soupsieve 2.3.2.post1 pypi_0 pypi
sqlite 3.40.1 h5082296_0
stack-data 0.6.2 pypi_0 pypi
stack_data 0.2.0 pyhd3eb1b0_0
terminado 0.17.0 pypi_0 pypi
threadpoolctl 3.1.0 pypi_0 pypi
tinycss2 1.2.1 pypi_0 pypi
tk 8.6.12 h1ccaba5_0
tldextract 3.4.0 pypi_0 pypi
tokenizers 0.13.2 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
torchvision 0.8.2 py38_cu101 pytorch
tornado 6.2 py38h5eee18b_0
tqdm 4.64.1 py38h06a4308_0
traitlets 5.6.0 pypi_0 pypi
transformers 4.25.1 pypi_0 pypi
tweepy 4.12.1 pypi_0 pypi
twisted 22.10.0 pypi_0 pypi
twython 3.9.1 pypi_0 pypi
typing-extensions 4.4.0 py38h06a4308_0
typing_extensions 4.4.0 py38h06a4308_0
uri-template 1.2.0 pypi_0 pypi
uriparser 0.9.3 he6710b0_1
urllib3 1.26.13 pypi_0 pypi
utf8proc 2.6.1 h27cfd23_0
w3lib 2.1.0 pypi_0 pypi
wandb 0.13.7 pypi_0 pypi
wcwidth 0.2.5 pyhd3eb1b0_0
webcolors 1.12 pypi_0 pypi
webencodings 0.5.1 pypi_0 pypi
websocket-client 1.4.2 pypi_0 pypi
werkzeug 2.2.2 py38h06a4308_0
wheel 0.38.4 py38h06a4308_0
widgetsnbextension 4.0.3 py38h06a4308_0
xxhash 0.8.0 h7f8727e_3
xz 5.2.10 h5eee18b_1
y-py 0.5.4 pypi_0 pypi
yaml 0.2.5 h7b6447c_0
yarl 1.8.1 py38h5eee18b_0
ypy-websocket 0.5.0 pypi_0 pypi
zeromq 4.3.4 h2531618_0
zipp 3.11.0 py38h06a4308_0
zlib 1.2.13 h5eee18b_0
zope-interface 5.5.2 pypi_0 pypi
zstd 1.4.9 haebb681_0
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5633/timeline | null | completed | null | null | false | [
"Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem."
] |
https://api.github.com/repos/huggingface/datasets/issues/1630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1630/comments | https://api.github.com/repos/huggingface/datasets/issues/1630/events | https://github.com/huggingface/datasets/issues/1630 | 774,332,129 | MDU6SXNzdWU3NzQzMzIxMjk= | 1,630 | Adding UKP Argument Aspect Similarity Corpus | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 3 | 2020-12-24T11:01:31Z | 2022-10-05T12:36:12Z | 2022-10-05T12:36:12Z | null | Hi, it would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as either “high similarity”, “some similarity”, “no similarity” or “not related” with respect to the topic.
- **Paper:** https://www.aclweb.org/anthology/P19-1054/
- **Data:** https://tudatalib.ulb.tu-darmstadt.de/handle/tudatalib/1998
- **Motivation:** this is one of the datasets currently used frequently in recent adapter papers like https://arxiv.org/pdf/2005.00247.pdf
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
Thank you | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1630/timeline | null | completed | null | null | false | [
"Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ",
"thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include it. ",
"Available here: https://huggingface.co/datasets/UKPLab/UKP_ASPECT"
] |
https://api.github.com/repos/huggingface/datasets/issues/365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/365/comments | https://api.github.com/repos/huggingface/datasets/issues/365/events | https://github.com/huggingface/datasets/issues/365 | 653,845,964 | MDU6SXNzdWU2NTM4NDU5NjQ= | 365 | How to augment data? | [] | closed | false | null | 6 | 2020-07-09T07:52:37Z | 2020-07-10T09:12:07Z | 2020-07-10T08:22:15Z | null | Is there any clean way to augment data?
For now, my workaround is to use a batched map, like this:
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=True)
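
# For comparison, a minimal sketch (assuming a hypothetical single "text"
# column) of a batched map that changes the number of rows, e.g. splitting
# overly long samples into several shorter chunks; `remove_columns` lets the
# output batch size differ from the input batch size.
def split_long(samples, max_len=512):
    chunks = []
    for text in samples["text"]:
        chunks.extend(text[i:i + max_len] for i in range(0, len(text), max_len))
    return {"text": chunks}

dataset = dataset.map(split_long, batched=True, remove_columns=dataset.column_names)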
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/365/timeline | null | completed | null | null | false | [
"Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?",
"Some samples in the dataset are too long, I want to divide them in several samples.",
"Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for augmentation.\r\n\r\nLet me know if you think there should be another way to do it. Or feel free to close the issue otherwise.",
"It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way.\r\n\r\nBut to be honest I have no idea of a good API...",
"Or for non-batched samples, how about returning a tuple ?\r\n\r\n```python\r\ndef aug(sample):\r\n # Simply copy the existing data to have x2 amount of data\r\n return sample, sample\r\n\r\ndataset = dataset.map(aug)\r\n```\r\n\r\nIt feels really natural and easy, but :\r\n\r\n* it means the behavior with batched data is different\r\n* I don't know how doable it is backend-wise\r\n\r\n@lhoestq ",
"As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples.\r\nIf we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example.\r\n\r\nIt's also a matter of coherence, as we don't want users to be confused whether they have to return dictionaries for some functions and tuples for others when they're doing batches."
] |
https://api.github.com/repos/huggingface/datasets/issues/4114 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4114/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4114/comments | https://api.github.com/repos/huggingface/datasets/issues/4114/events | https://github.com/huggingface/datasets/issues/4114 | 1,194,855,345 | I_kwDODunzps5HOAux | 4,114 | Allow downloading just some columns of a dataset | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2022-04-06T16:38:46Z | 2022-04-07T07:56:26Z | null | null | **Is your feature request related to a problem? Please describe.**
Some people are interested in doing label analysis of a CV dataset without downloading all the images. Downloading the whole dataset does not always make sense for this kind of use case.
**Describe the solution you'd like**
Be able to just download some columns of a dataset, such as doing
```python
load_dataset("huggan/wikiart",columns=["artist", "genre"])
```
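A rough per-format workaround in the meantime, assuming the labels ship as a CSV file (the file name below is hypothetical): pandas can restrict parsing to selected columns via `usecols`, although the raw file still has to be downloaded in full.

```python
# Minimal sketch: only the requested columns are parsed into the DataFrame,
# but the CSV itself is still fetched whole.
import pandas as pd

labels = pd.read_csv("wikiart_metadata.csv", usecols=["artist", "genre"])
print(labels.head())
```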
Although this might make things a bit complicated in terms of local caching of datasets. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4114/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4114/timeline | null | null | null | null | false | [
"In the general case you can’t always reduce the quantity of data to download, since you can’t parse CSV or JSON data without downloading the whole files right ? ^^ However we could explore this case-by-case I guess",
"Actually for csv pandas has `usecols` which allows loading a subset of columns in a more efficient way afaik, but yes, you're right this might be more complex than I thought."
] |
https://api.github.com/repos/huggingface/datasets/issues/708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/708/comments | https://api.github.com/repos/huggingface/datasets/issues/708/events | https://github.com/huggingface/datasets/issues/708 | 714,020,953 | MDU6SXNzdWU3MTQwMjA5NTM= | 708 | Datasets performance slow? - 6.4x slower than in memory dataset | [] | closed | false | null | 10 | 2020-10-03T06:44:07Z | 2021-02-12T14:13:28Z | 2021-02-12T14:13:28Z | null | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected, I guess, due to memory-mapping data using Arrow files; you don't get anything for free. But I was surprised at how much slower it was.
For example, with the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it onto the GPU (no model involved), whereas the equivalent in-memory dataset would finish in just 0:33.
Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems slower than I would have expected. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but I thought I'd open this issue to discuss.
For reference, I'm running an AMD Ryzen Threadripper 1900X 8-Core Processor, with 128 GB of RAM and a Samsung 960 EVO NVMe SSD. I'm running with an RTX Titan 24GB GPU.
I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower.
What am I doing wrong? And is there a way to force a dataset to load completely into memory instead of being memory-mapped, in cases where you want maximum performance?
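For reference, a minimal sketch assuming a `datasets` version where `load_dataset` accepts `keep_in_memory` (the maintainers discuss adding it in the comments below):

```python
# Minimal sketch: the Arrow tables are loaded into RAM up front instead of
# being memory-mapped from disk.
from datasets import load_dataset

ds = load_dataset('yelp_polarity', keep_in_memory=True)
```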
At 3:31 for 17500 batches, that's about 12ms per batch. Does this 12ms just become insignificant compared to the forward and backward passes in practice, and thus not worth worrying about?
In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test.
``` py
import sys
from datasets import load_dataset
from transformers import DataCollatorWithPadding, BertTokenizerFast
from torch.utils.data import DataLoader
from tqdm import tqdm
if __name__ == '__main__':
tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
collate_fn = DataCollatorWithPadding(tokenizer, padding=True)
ds = load_dataset('yelp_polarity')
def do_tokenize(x):
return tokenizer(x['text'], truncation=True)
ds = ds.map(do_tokenize, batched=True)
ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask'])
if len(sys.argv) == 2 and sys.argv[1] == 'memory':
# copy to memory - probably a faster way to do this - but demonstrates the point
# approximately 530 batches per second - 17500 batches in 0:33
print('using memory')
_ds = [data for data in tqdm(ds['train'])]
else:
# approximately 83 batches per second - 17500 batches in 3:31
print('using datasets')
_ds = ds['train']
dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4)
for data in tqdm(dl):
for k, v in data.items():
data[k] = v.to('cuda')
```
For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d)
Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints.
Thanks for all your great work.
| {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/708/timeline | null | completed | null | null | false | [
"Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.",
"And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?",
"Thanks for the tip @thomwolf ! I did not see that flag in the docs. I'll try with that.",
"We should add it indeed and also maybe a specific section with all the tips for maximal speed. What do you think @lhoestq @SBrandeis @yjernite ?",
"By default the datasets loaded with `load_dataset` live on disk.\r\nIt's possible to load them in memory by using some transforms like `.map(..., keep_in_memory=True)`.\r\n\r\nSmall correction to @thomwolf 's comment above: currently we don't have the `keep_in_memory` parameter for `load_dataset` AFAIK but it would be nice to add it indeed :)",
"Yes indeed we should add it!",
"Great! Thanks a lot.\r\n\r\nI did a test using `map(..., keep_in_memory=True)` and also a test using in-memory only data.\r\n\r\n```python\r\nfeatures = dataset.map(tokenize, batched=True, remove_columns=dataset['train'].column_names)\r\nfeatures.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nfeatures_in_memory = dataset.map(tokenize, batched=True, keep_in_memory=True, remove_columns=dataset['train'].column_names)\r\nfeatures_in_memory.set_format(type='torch', columns=['input_ids', 'token_type_ids', 'attention_mask'])\r\n\r\nin_memory = [features['train'][i] for i in range(len(features['train']))]\r\n```\r\n\r\nFor using the features without any tweak, I got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nFor using the features mapped with `keep_in_memory=True`, I also got **1min17s** for copying the entire DataLoader to CUDA:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(features_in_memory['train'], batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nAnd for the case using every element in memory, converted from the original dataset, I got **12.5s**:\r\n\r\n```\r\n%%time\r\n\r\nfor i, batch in enumerate(DataLoader(in_memory, batch_size=16, num_workers=4)):\r\n batch['input_ids'].to(device)\r\n```\r\n\r\nTaking a closer look in my SQuAD code, using a profiler, I see a lot of calls to `posix read` api. It seems that it is really reliying on disk, which results in a very high train time.",
"I am having the same issue here. When loading from memory I can get the GPU up to 70% util but when loading after mapping I can only get 40%.\r\n\r\nIn disk:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\nbook_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=2500)\r\nbook_corpus.set_format(type='torch', columns=['text', \"input_ids\", \"attention_mask\", \"token_type_ids\"])\r\n\r\ntraining_args = TrainingArguments(\r\n output_dir=\"./mobile_bert_big\",\r\n overwrite_output_dir=True,\r\n num_train_epochs=1,\r\n per_device_train_batch_size=32,\r\n per_device_eval_batch_size=16,\r\n save_steps=50,\r\n save_total_limit=2,\r\n logging_first_step=True,\r\n warmup_steps=100,\r\n logging_steps=50,\r\n eval_steps=100,\r\n no_cuda=False,\r\n gradient_accumulation_steps=16,\r\n fp16=True)\r\n\r\ntrainer = Trainer(\r\n model=model,\r\n args=training_args,\r\n data_collator=data_collator,\r\n train_dataset=book_corpus,\r\n tokenizer=tokenizer)\r\n```\r\n\r\nIn disk I can only get 0,17 it/s:\r\n`[ 13/28907 01:03 < 46:03:27, 0.17 it/s, Epoch 0.00/1] `\r\n\r\nIf I load it with torch.utils.data.Dataset()\r\n```\r\nclass BCorpusDataset(torch.utils.data.Dataset):\r\n def __init__(self, encodings):\r\n self.encodings = encodings\r\n\r\n def __getitem__(self, idx):\r\n item = [torch.tensor(val[idx]) for key, val in self.encodings.items()][0]\r\n return item\r\n\r\n def __len__(self):\r\n length = [len(val) for key, val in self.encodings.items()][0]\r\n return length\r\n\r\n**book_corpus = book_corpus.select([i for i in range(16*2000)])** # filtering to not have 20% of BC in memory...\r\nbook_corpus = book_corpus(book_corpus)\r\n```\r\nI can get:\r\n` [ 5/62 00:09 < 03:03, 0.31 it/s, Epoch 0.06/1]`\r\n\r\nBut obviously I can not get BookCorpus in memory xD\r\n\r\nEDIT: it is something weird. If i load in disk 1% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]')\r\n```\r\n\r\nI can get 0.28 it/s, (the same that in memory) but if I load 20% of bookcorpus:\r\n```\r\nbook_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:20%]')\r\n```\r\nI get again 0.17 it/s. \r\n\r\nI am missing something? I think it is something related to size, and not disk or in-memory.",
"There is a way to increase the batches read from memory? or multiprocessed it? I think that one of two or it is reading with just 1 core o it is reading very small chunks from disk and left my GPU at 0 between batches",
"My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks."
] |
https://api.github.com/repos/huggingface/datasets/issues/2419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2419/comments | https://api.github.com/repos/huggingface/datasets/issues/2419/events | https://github.com/huggingface/datasets/pull/2419 | 904,347,339 | MDExOlB1bGxSZXF1ZXN0NjU1NTA1OTM1 | 2,419 | adds license information for DailyDialog. | [] | closed | false | null | 5 | 2021-05-27T23:03:42Z | 2021-05-31T13:16:52Z | 2021-05-31T13:16:52Z | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2419/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2419.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2419",
"merged_at": "2021-05-31T13:16:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2419.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2419"
} | true | [
"Thanks! Can you also add it as metadata in the YAML block at the top of the file?\r\n\r\nShould be in the form:\r\n\r\n```\r\nlicenses:\r\n- cc-by-sa-4.0\r\n```",
"seems like we need to add all the other tags ? \r\n``` \r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE __init__() missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'languages', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```",
"I'll let @lhoestq or @yjernite chime in (and maybe complete/merge). Thanks!",
"Looks like CircleCI has an incident. Let's wait for it to be working again and make sure the CI is green",
"The remaining error is unrelated to this PR, merging"
] |
|
https://api.github.com/repos/huggingface/datasets/issues/4827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4827/comments | https://api.github.com/repos/huggingface/datasets/issues/4827/events | https://github.com/huggingface/datasets/pull/4827 | 1,335,994,312 | PR_kwDODunzps49B1zi | 4,827 | Add license metadata to pg19 | [] | closed | false | null | 1 | 2022-08-11T13:52:20Z | 2022-08-11T15:01:03Z | 2022-08-11T14:46:38Z | null | As reported over email by Roy Rijkers | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4827/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4827.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4827",
"merged_at": "2022-08-11T14:46:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4827.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4827"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/47 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/47/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/47/comments | https://api.github.com/repos/huggingface/datasets/issues/47/events | https://github.com/huggingface/datasets/pull/47 | 612,446,493 | MDExOlB1bGxSZXF1ZXN0NDEzMzg5MDc1 | 47 | [PyArrow Feature] fix py arrow bool | [] | closed | false | null | 0 | 2020-05-05T08:56:28Z | 2020-05-05T10:40:28Z | 2020-05-05T10:40:27Z | null | To me it seems that `bool` can only be accessed with `bool_` when looking at the pyarrow types: https://arrow.apache.org/docs/python/api/datatypes.html. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/47/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/47/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/47.diff",
"html_url": "https://github.com/huggingface/datasets/pull/47",
"merged_at": "2020-05-05T10:40:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/47.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/47"
} | true | [] |
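For context on the fix above: pyarrow names its boolean type factory `bool_` to avoid shadowing Python's built-in `bool`. A minimal sketch of the distinction, assuming a recent pyarrow release:

```python
import pyarrow as pa

# The boolean type factory is `bool_`; `pa.bool` is not a pyarrow type.
boolean_type = pa.bool_()
arr = pa.array([True, False, None], type=boolean_type)
print(arr.type)  # bool
```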
https://api.github.com/repos/huggingface/datasets/issues/1071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1071/comments | https://api.github.com/repos/huggingface/datasets/issues/1071/events | https://github.com/huggingface/datasets/pull/1071 | 756,447,296 | MDExOlB1bGxSZXF1ZXN0NTMxOTkwNzY1 | 1,071 | add xlrd to test package requirements | [] | closed | false | null | 0 | 2020-12-03T18:32:47Z | 2020-12-03T18:47:16Z | 2020-12-03T18:47:16Z | null | Adds `xlrd` package to the test requirements to handle scripts that use `pandas` to load excel files | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1071/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1071.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1071",
"merged_at": "2020-12-03T18:47:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1071.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1071"
} | true | [] |
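For context, `pandas.read_excel` delegates parsing to an optional engine, which is why the test environment needs `xlrd` installed for scripts that load Excel files. A minimal sketch, assuming a hypothetical legacy `data.xls` file (recent `xlrd` releases read only `.xls`; `.xlsx` files need `openpyxl` instead):

```python
import pandas as pd

# Forcing the engine makes the optional dependency explicit;
# without xlrd installed this raises an ImportError.
df = pd.read_excel("data.xls", engine="xlrd")
print(df.head())
```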
https://api.github.com/repos/huggingface/datasets/issues/1145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1145/comments | https://api.github.com/repos/huggingface/datasets/issues/1145/events | https://github.com/huggingface/datasets/pull/1145 | 757,477,349 | MDExOlB1bGxSZXF1ZXN0NTMyODQ4MTQx | 1,145 | Add Species-800 | [] | closed | false | null | 4 | 2020-12-04T23:44:51Z | 2022-01-13T03:09:20Z | 2020-12-05T16:35:01Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1145/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1145/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1145.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1145",
"merged_at": "2020-12-05T16:35:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1145.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1145"
} | true | [
"thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that",
"Yes indeed ! Good catch 👍 \r\nFeel free to open a PR and ping me",
"Hi , theres a issue pulling species_800 dataset , throws google drive error \r\n\r\nerror: \r\n\r\n```\r\nraise ConnectionError(f\"Couldn't reach {url} ({repr(head_error)})\")\r\nConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ (ReadTimeout(ReadTimeoutError(\"HTTPSConnectionPool(host='drive.google.com', port=443): Read timed out. (read timeout=10)\")))\r\n```\r\ncode: \r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"species_800\")\r\n```",
"Hi @obonyojimmy! I am running the same commands and they work for me. Did you check your internet connection?"
] |
|
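The read timeout reported in the comments above happens during the download step. One possible mitigation (a sketch, not a guaranteed fix for an unreachable Google Drive host) is to raise the HTTP retry count through `DownloadConfig`, which `load_dataset` accepts via `download_config`:

```python
from datasets import DownloadConfig, load_dataset

# max_retries only helps with transient network errors; a host that
# is actually down or rate-limiting will still fail eventually.
cfg = DownloadConfig(max_retries=5)
ds = load_dataset("species_800", download_config=cfg)
```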
https://api.github.com/repos/huggingface/datasets/issues/625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/625/comments | https://api.github.com/repos/huggingface/datasets/issues/625/events | https://github.com/huggingface/datasets/issues/625 | 701,057,799 | MDU6SXNzdWU3MDEwNTc3OTk= | 625 | dtype of tensors should be preserved | [] | closed | false | null | 9 | 2020-09-14T12:38:05Z | 2021-08-17T08:30:04Z | 2021-08-17T08:30:04Z | null | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)).
As a user I did not expect this bug. I have a `map` function that I call on the Dataset that looks like this:
```python
def preprocess(sentences: List[str]):
token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
sembeddings = stransformer.encode(sentences)
print(sembeddings.dtype)
return {"input_ids": token_ids, "sembedding": sembeddings}
```
Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32.
It appears, though, that behind the scenes this tensor is converted into a **list**. I did not find this documented anywhere, but I might have missed it. From a user's perspective this is incredibly important, because it means you cannot do any dtype or tensor casting yourself in a mapping function! Furthermore, this can lead to issues, as it did in my case.
My model expected float32 precision, which I thought `sembedding` was because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.
```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```
This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.
```python
import torch
import numpy as np
l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)
# torch.float32
print(np_array.dtype)
# float64
print(np_to_torch.dtype)
# torch.float64
```
This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision.
The alternative is that the user needs to cast manually after loading data from the dataset but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/625/timeline | null | completed | null | null | false | [
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd then for your information, when reading from arrow format we have to cast from arrow to numpy (which is fast since pyarrow has a numpy integration), and then to torch.\r\n\r\nHowever there's one thing that can help you: we make sure that the dtypes correspond to what is defined in `features`.\r\nTherefore what you can do is provide `features` in `.map(preprocess, feature=...)` to specify the output types.\r\n\r\nFor example in your case:\r\n```python\r\nfrom datasets import Features, Value, Sequence\r\n\r\nfeatures = Features({\r\n \"input_ids\": Sequence(Value(\"int32\")),\r\n \"sembedding\": Sequence(Value(\"float32\"))\r\n})\r\npreprocessed_dataset = dataset.map(preprocess, features=features)\r\n\r\npreprocessed_dataset.set_format(\"torch\", columns=[\"input_ids\", \"sembedding\"])\r\nprint(preprocessed_dataset[0][\"sembedding\"].dtype)\r\n# \"torch.float32\"\r\n```\r\n\r\nLet me know if it helps",
"If the arrow format is basically lists, why is the intermediate step to numpy necessary? I am a bit confused about that part.\r\n\r\nThanks for your suggestion. as I have currently implemented this, I cast to torch.Tensor in my collate_fn to save disk space (so I do not have to save padded tensors to max_len but can pad up to max batch len in collate_fn) at the cost of a bit slower processing. So for me this is not relevant anymore, but I am sure it is for others!",
"I'm glad you managed to figure something out :)\r\n\r\nCasting from arrow to numpy can be 100x faster than casting from arrow to list.\r\nThis is because arrow has an integration with numpy that allows it to instantiate numpy arrays with zero-copy from arrow.\r\nOn the other hand to create python lists it is slow since it has to recreate the list object by iterating through each element in python.",
"Ah that is interesting. I have no direct experience with arrow so I didn't know. ",
"I encountered a simliar issue: `datasets` converted my float numpy array to `torch.float64` tensors, while many pytorch operations require `torch.float32` inputs and it's very troublesome. \r\n\r\nI tried @lhoestq 's solution, but since it's mixed with the preprocess function, it's not very intuitive. \r\n\r\nI just want to share another possible simpler solution: directly cast the dtype of the processed dataset.\r\n\r\nNow I want to change the type of `labels` in `train_dataset` from float64 to float32, I can do this.\r\n\r\n```\r\nfrom datasets import Value, Sequence, Features\r\nfeats = train_dataset.features.copy()\r\nfeats['labels'].feature = Value(dtype='float32')\r\nfeats = Features(feats)\r\ntrain_dataset.cast_(feats)\r\n```\r\n",
"Reopening since @bhavitvyamalik started looking into it !\r\n\r\nAlso I'm posting here a function that could be helpful to support preserving the dtype of tensors.\r\n\r\nIt's used to build a pyarrow array out of a numpy array and:\r\n- it doesn't convert the numpy array to a python list\r\n- it keeps the precision of the numpy array for the pyarrow array\r\n- it works with multidimensional arrays (while `pa.array` can only take a 1D array as input)\r\n- it builds the pyarrow ListArray from offsets created on-the-fly and values that come from the flattened numpy array\r\n\r\n```python\r\nfrom functools import reduce\r\nfrom operator import mul\r\n\r\nimport numpy as np\r\nimport pyarrow as pa\r\n\r\ndef pa_ndarray(a):\r\n \"\"\"Build a PyArrow ListArray from a multidimensional NumPy array\"\"\"\r\n values = pa.array(a.flatten()) \r\n for i in range(a.ndim - 1): \r\n n_offsets = reduce(mul, a.shape[:a.ndim - i - 1], 1) \r\n step_offsets = a.shape[a.ndim - i - 1] \r\n offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32()) \r\n values = pa.ListArray.from_arrays(offsets, values) \r\n return values \r\n\r\nnarr = np.arange(42).reshape(7, 2, 3).astype(np.uint8)\r\nparr = pa_ndarray(narr)\r\nassert isinstance(parr, pa.Array)\r\nassert parr.type == pa.list_(pa.list_(pa.uint8()))\r\nassert narr.tolist() == parr.to_pylist()\r\n```\r\n\r\nThe only costly operation is the offsets computations. Since it doesn't iterate on the numpy array values this function is pretty fast.",
"@lhoestq Have you thought about this further?\r\n\r\nWe have a use case where we're attempting to load data containing numpy arrays using the `datasets` library.\r\n\r\nWhen using one of the \"standard\" methods (`[Value(...)]` or `Sequence()`) we see ~200 samples processed per second during the call to `_prepare_split`. This slowdown is caused by the vast number of calls to `encode_nested_example` (each sequence is converted to a list, and each element in the sequence...). \r\n\r\nUsing the `Feature` `ArrayND` improves this somewhat to ~500/s as it now uses numpy's `tolist()` rather than iterating over each value in the array and converting them individually.\r\n\r\nHowever, it's still pretty slow and in theory it should be possible to avoid the `numpy -> python -> arrow` dance altogether. To demonstrate this, if you keep the `Feature` set to an `ArrayND` but instead return a `pa_ndarray(...)` in `_generate_examples` it skips the conversion (`return obj, False`) and hits ~11_000/s. Two orders of magnitude speed up! The problem is this then fails later on when the `ArrowWriter` tries to write the examples to disk :-( \r\n\r\nIt would be nice to have first-class support for user-defined PyArrow objects. Is this a possibility? We have _large_ datasets where even an order of magnitude difference is important so settling on the middle ~500/s is less than ideal! \r\n\r\nIs there a workaround for this or another method that should be used instead that gets near-to or equal performance to returning PyArrow arrays?",
"Note that manually generating the table using `pyarrow` achieves ~30_000/s",
"Hi !\r\n\r\nIt would be awesome to achieve this speed for numpy arrays !\r\nFor now we have to use `encode_nested_example` to convert numpy arrays to python lists since pyarrow doesn't support multidimensional numpy arrays (only 1D).\r\n\r\nMaybe let's start a new PR from your PR @bhavitvyamalik (idk why we didn't answer your PR at that time, sorry about that).\r\nBasically the idea is to allow `TypedSequence` to support numpy arrays as you did, and remove the numpy->python casting in `_cast_to_python_objects`.\r\n\r\nThis is really important since we are starting to have a focus on other modalities than text as well (audio, images).\r\n\r\nThough until then @samgd, there is another feature that may interest you and that may give you the speed you want:\r\n\r\nIn a dataset script you can subclass either a GeneratorBasedBuilder (with the `_generate_examples ` method) or an ArrowBasedBuilder if you want. the ArrowBasedBuilder allows to yield arrow data by implementing the `_generate_tables` method (it's the same as `_generate_examples` except you must yield arrow tables). Since the data are already in arrow format, it doesn't call `encode_nested_example`. Let me know if that helps."
] |
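The discussion above claims that arrow→numpy conversion is much faster than arrow→list because primitive Arrow arrays can be viewed as numpy arrays without copying. A minimal sketch illustrating the difference (timings will vary by machine):

```python
import numpy as np
import pyarrow as pa

arr = pa.array(np.arange(1_000_000, dtype=np.float32))

# Zero-copy view into the Arrow buffer: the float32 dtype survives.
np_view = arr.to_numpy()
print(np_view.dtype)  # float32

# Materializing Python objects walks every element and is far slower.
py_list = arr.to_pylist()
print(type(py_list[0]))  # <class 'float'>
```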
https://api.github.com/repos/huggingface/datasets/issues/6039 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6039/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6039/comments | https://api.github.com/repos/huggingface/datasets/issues/6039/events | https://github.com/huggingface/datasets/issues/6039 | 1,806,508,451 | I_kwDODunzps5rrSGj | 6,039 | Loading column subset from parquet file produces error since version 2.13 | [] | closed | false | null | 0 | 2023-07-16T09:13:07Z | 2023-07-24T14:35:04Z | 2023-07-24T14:35:04Z | null | ### Describe the bug
`load_dataset` allows loading a subset of columns from a parquet file with the `columns` argument. Since version 2.13, this produces the following error:
```
Traceback (most recent call last):
File "/usr/lib/python3.10/site-packages/datasets/builder.py", line 1879, in _prepare_split_single
for _, table in generator:
File "/usr/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables
raise ValueError(
ValueError: Tried to load parquet data with columns '['sepal_length']' with mismatching features '{'sepal_length': Value(dtype='float64', id=None), 'sepal_width': Value(dtype='float64', id=None), 'petal_length': Value(dtype='float64', id=None), 'petal_width': Value(dtype='float64', id=None), 'species': Value(dtype='string', id=None)}'
```
This seems to occur because `datasets` checks whether the provided list of columns exactly matches the schema's columns, instead of checking that it is a subset of them.
### Steps to reproduce the bug
```python
# Prepare some sample data
import pandas as pd
iris = pd.read_csv('https://raw.githubusercontent.com/mwaskom/seaborn-data/master/iris.csv')
iris.to_parquet('iris.parquet')
# ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species']
print(iris.columns)
# Load data with datasets
from datasets import load_dataset
# Load full parquet file
dataset = load_dataset('parquet', data_files='iris.parquet')
# Load column subset; throws error for datasets>=2.13
dataset = load_dataset('parquet', data_files='iris.parquet', columns=['sepal_length'])
```
### Expected behavior
No error should be thrown and the given column subset should be loaded.
### Environment info
- `datasets` version: 2.13.0
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.9
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6039/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6039/timeline | null | completed | null | null | false | [] |
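Until the subset check is relaxed, two possible workarounds (sketches reusing the `iris.parquet` file from the reproduction above): project columns after loading with `Dataset.select_columns`, or read the subset with `pyarrow.parquet` and wrap the table in a `Dataset`:

```python
import pyarrow.parquet as pq
from datasets import Dataset, load_dataset

# Workaround A: load everything, then keep only the wanted columns.
ds = load_dataset("parquet", data_files="iris.parquet")["train"]
ds = ds.select_columns(["sepal_length"])

# Workaround B: let pyarrow do the column projection directly.
table = pq.read_table("iris.parquet", columns=["sepal_length"])
ds2 = Dataset(table)
```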
https://api.github.com/repos/huggingface/datasets/issues/5845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5845/comments | https://api.github.com/repos/huggingface/datasets/issues/5845/events | https://github.com/huggingface/datasets/pull/5845 | 1,706,253,251 | PR_kwDODunzps5QUMjS | 5,845 | Add `date_format` param to the CSV reader | [] | closed | false | null | 6 | 2023-05-11T17:29:57Z | 2023-05-15T07:39:13Z | 2023-05-12T15:14:48Z | null | Adds the `date_format` param introduced in Pandas 2.0 to the CSV reader and improves its type hints. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5845/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5845",
"merged_at": "2023-05-12T15:14:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5845"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007592 / 0.011353 (-0.003761) | 0.005223 / 0.011008 (-0.005786) | 0.110218 / 0.038508 (0.071710) | 0.027644 / 0.023109 (0.004534) | 0.335063 / 0.275898 (0.059165) | 0.347102 / 0.323480 (0.023623) | 0.005107 / 0.007986 (-0.002878) | 0.003932 / 0.004328 (-0.000396) | 0.086095 / 0.004250 (0.081845) | 0.034735 / 0.037052 (-0.002317) | 0.329029 / 0.258489 (0.070540) | 0.370282 / 0.293841 (0.076441) | 0.043040 / 0.128546 (-0.085507) | 0.019626 / 0.075646 (-0.056021) | 0.336452 / 0.419271 (-0.082819) | 0.070365 / 0.043533 (0.026832) | 0.326881 / 0.255139 (0.071742) | 0.354984 / 0.283200 (0.071785) | 0.102605 / 0.141683 (-0.039077) | 1.459161 / 1.452155 (0.007007) | 1.453599 / 1.492716 (-0.039117) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201021 / 0.018006 (0.183015) | 0.456415 / 0.000490 (0.455926) | 0.012349 / 0.000200 (0.012149) | 0.000115 / 0.000054 (0.000061) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025199 / 0.037411 (-0.012213) | 0.098536 / 0.014526 (0.084010) | 0.107528 / 0.176557 (-0.069028) | 0.160492 / 0.737135 (-0.576643) | 0.108660 / 0.296338 (-0.187679) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.527020 / 0.215209 (0.311811) | 5.357635 / 2.077655 (3.279980) | 2.062930 / 1.504120 (0.558811) | 1.783009 / 1.541195 (0.241815) | 1.840225 / 1.468490 
(0.371735) | 1.074278 / 4.584777 (-3.510499) | 4.710533 / 3.745712 (0.964821) | 2.611202 / 5.269862 (-2.658660) | 1.885487 / 4.565676 (-2.680189) | 0.123201 / 0.424275 (-0.301074) | 0.013880 / 0.007607 (0.006273) | 0.636511 / 0.226044 (0.410467) | 6.516075 / 2.268929 (4.247146) | 2.710138 / 55.444624 (-52.734486) | 2.046606 / 6.876477 (-4.829871) | 2.085907 / 2.142072 (-0.056166) | 1.199489 / 4.805227 (-3.605738) | 0.211668 / 6.500664 (-6.288996) | 0.075436 / 0.075469 (-0.000033) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.219771 / 1.841788 (-0.622016) | 14.276215 / 8.074308 (6.201907) | 16.611529 / 10.191392 (6.420137) | 0.221091 / 0.680424 (-0.459333) | 0.024922 / 0.534201 (-0.509279) | 0.431906 / 0.579283 (-0.147377) | 0.518863 / 0.434364 (0.084499) | 0.515366 / 0.540337 (-0.024971) | 0.640411 / 1.386936 (-0.746525) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007955 / 0.011353 (-0.003398) | 0.004813 / 0.011008 (-0.006196) | 0.076508 / 0.038508 (0.038000) | 0.028137 / 0.023109 (0.005028) | 0.349609 / 0.275898 (0.073711) | 0.403588 / 0.323480 (0.080109) | 0.005456 / 0.007986 (-0.002530) | 0.005677 / 0.004328 (0.001349) | 0.076882 / 0.004250 (0.072632) | 0.039832 / 0.037052 (0.002779) | 0.351930 / 0.258489 (0.093440) | 0.390492 / 0.293841 (0.096651) | 0.045199 / 0.128546 (-0.083347) | 0.023945 / 0.075646 (-0.051701) | 0.091140 / 0.419271 (-0.328132) | 0.057728 / 0.043533 (0.014195) | 0.370663 / 0.255139 (0.115524) | 0.380649 / 0.283200 (0.097449) | 0.097017 / 0.141683 (-0.044666) | 1.362248 / 1.452155 (-0.089907) | 1.445699 / 1.492716 (-0.047018) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204207 / 0.018006 (0.186201) | 0.474471 / 0.000490 (0.473981) | 0.012187 / 0.000200 (0.011987) | 0.000151 / 0.000054 (0.000096) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023123 / 0.037411 (-0.014288) | 0.097547 / 0.014526 (0.083021) | 0.113877 / 0.176557 (-0.062679) | 0.158307 / 0.737135 (-0.578828) | 0.113876 / 0.296338 (-0.182462) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.519920 / 0.215209 (0.304711) | 5.384371 / 2.077655 (3.306716) | 2.263276 / 1.504120 (0.759156) | 1.960604 / 1.541195 (0.419409) | 2.022864 / 1.468490 (0.554374) | 1.015430 / 4.584777 (-3.569347) | 4.774426 / 3.745712 (1.028714) | 4.549598 / 5.269862 (-0.720264) | 2.412638 / 4.565676 (-2.153039) | 0.117983 / 0.424275 (-0.306292) | 0.013340 / 0.007607 (0.005733) | 0.639826 / 0.226044 (0.413782) | 6.491622 / 2.268929 (4.222693) | 2.946892 / 55.444624 (-52.497732) | 2.376393 / 6.876477 (-4.500084) | 2.285592 / 2.142072 (0.143519) | 1.185049 / 4.805227 (-3.620178) | 0.204127 / 6.500664 (-6.296537) | 0.070285 / 0.075469 (-0.005184) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.439736 / 1.841788 (-0.402052) | 14.852087 / 8.074308 (6.777779) | 15.675742 / 10.191392 (5.484350) | 0.206577 / 0.680424 (-0.473846) | 0.031688 / 0.534201 (-0.502513) | 0.471003 / 0.579283 (-0.108280) | 0.505449 / 0.434364 (0.071085) | 0.506114 / 0.540337 (-0.034224) | 0.583752 / 1.386936 (-0.803184) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012965 / 0.011353 (0.001612) | 0.006660 / 0.011008 (-0.004348) | 0.126060 / 0.038508 (0.087551) | 0.041154 / 0.023109 (0.018045) | 0.413428 / 0.275898 (0.137530) | 0.429035 / 0.323480 (0.105555) | 0.006680 / 0.007986 (-0.001305) | 0.005063 / 0.004328 (0.000734) | 0.092161 / 0.004250 (0.087911) | 0.056092 / 0.037052 (0.019039) | 0.421460 / 0.258489 (0.162971) | 0.450291 / 0.293841 (0.156450) | 0.050820 / 0.128546 (-0.077726) | 0.021392 / 0.075646 (-0.054255) | 0.426915 / 0.419271 (0.007643) | 0.064908 / 0.043533 (0.021375) | 0.406769 / 0.255139 (0.151630) | 0.434344 / 0.283200 (0.151144) | 0.127967 / 0.141683 (-0.013716) | 1.922414 / 1.452155 (0.470260) | 1.940717 / 1.492716 (0.448000) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.288024 / 0.018006 (0.270017) | 0.615859 / 0.000490 (0.615369) | 0.007095 / 0.000200 (0.006895) | 0.000160 / 0.000054 (0.000106) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028182 / 0.037411 (-0.009230) | 0.126277 / 0.014526 (0.111752) | 0.131687 / 0.176557 (-0.044870) | 0.206191 / 0.737135 (-0.530944) | 0.141799 / 0.296338 (-0.154539) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.631580 / 0.215209 (0.416371) | 6.141942 / 2.077655 (4.064287) | 2.476721 / 1.504120 (0.972602) | 2.128850 / 1.541195 (0.587655) | 2.236468 / 1.468490 
(0.767978) | 1.188665 / 4.584777 (-3.396112) | 5.481179 / 3.745712 (1.735467) | 3.120333 / 5.269862 (-2.149529) | 2.365889 / 4.565676 (-2.199787) | 0.145081 / 0.424275 (-0.279194) | 0.015866 / 0.007607 (0.008259) | 0.795650 / 0.226044 (0.569605) | 7.595289 / 2.268929 (5.326361) | 3.174418 / 55.444624 (-52.270207) | 2.905207 / 6.876477 (-3.971270) | 2.428263 / 2.142072 (0.286191) | 1.408900 / 4.805227 (-3.396328) | 0.265485 / 6.500664 (-6.235179) | 0.083882 / 0.075469 (0.008413) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.517025 / 1.841788 (-0.324762) | 18.110288 / 8.074308 (10.035980) | 20.810003 / 10.191392 (10.618611) | 0.210380 / 0.680424 (-0.470044) | 0.030180 / 0.534201 (-0.504021) | 0.523453 / 0.579283 (-0.055830) | 0.603896 / 0.434364 (0.169532) | 0.622554 / 0.540337 (0.082216) | 0.737973 / 1.386936 (-0.648963) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009795 / 0.011353 (-0.001558) | 0.006269 / 0.011008 (-0.004739) | 0.099938 / 0.038508 (0.061430) | 0.035162 / 0.023109 (0.012052) | 0.506353 / 0.275898 (0.230455) | 0.527804 / 0.323480 (0.204324) | 0.007211 / 0.007986 (-0.000775) | 0.005498 / 0.004328 (0.001169) | 0.098325 / 0.004250 (0.094075) | 0.054513 / 0.037052 (0.017461) | 0.525764 / 0.258489 (0.267274) | 0.576699 / 0.293841 (0.282858) | 0.052800 / 0.128546 (-0.075747) | 0.021192 / 0.075646 (-0.054454) | 0.117676 / 0.419271 (-0.301596) | 0.055415 / 0.043533 (0.011882) | 0.516746 / 0.255139 (0.261607) | 0.528417 / 0.283200 (0.245217) | 0.116947 / 0.141683 (-0.024735) | 1.757864 / 1.452155 (0.305709) | 2.043632 / 1.492716 (0.550916) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.284018 / 0.018006 (0.266011) | 0.595086 / 0.000490 (0.594596) | 0.001945 / 0.000200 (0.001745) | 0.000127 / 0.000054 (0.000073) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032255 / 0.037411 (-0.005157) | 0.128201 / 0.014526 (0.113676) | 0.139189 / 0.176557 (-0.037367) | 0.199750 / 0.737135 (-0.537385) | 0.149406 / 0.296338 (-0.146933) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.652184 / 0.215209 (0.436975) | 6.453319 / 2.077655 (4.375664) | 2.831566 / 1.504120 (1.327446) | 2.453064 / 1.541195 (0.911869) | 2.622056 / 1.468490 (1.153566) | 1.191279 / 4.584777 (-3.393498) | 5.504720 / 3.745712 (1.759007) | 5.916900 / 5.269862 (0.647038) | 2.974400 / 4.565676 (-1.591277) | 0.142851 / 0.424275 (-0.281424) | 0.015241 / 0.007607 (0.007634) | 0.917537 / 0.226044 (0.691493) | 8.277645 / 2.268929 (6.008717) | 3.700495 / 55.444624 (-51.744130) | 3.047127 / 6.876477 (-3.829350) | 3.093216 / 2.142072 (0.951143) | 1.413529 / 4.805227 (-3.391698) | 0.259395 / 6.500664 (-6.241270) | 0.083144 / 0.075469 (0.007675) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.632240 / 1.841788 (-0.209548) | 18.687403 / 8.074308 (10.613095) | 20.134091 / 10.191392 (9.942699) | 0.238792 / 0.680424 (-0.441632) | 0.027645 / 0.534201 (-0.506556) | 0.518200 / 0.579283 (-0.061083) | 0.613535 / 0.434364 (0.179171) | 0.631414 / 0.540337 (0.091076) | 0.724658 / 1.386936 (-0.662278) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006228 / 0.011353 (-0.005125) | 0.004517 / 0.011008 (-0.006492) | 0.097998 / 0.038508 (0.059490) | 0.027903 / 0.023109 (0.004793) | 0.309789 / 0.275898 (0.033891) | 0.332784 / 0.323480 (0.009304) | 0.004757 / 0.007986 (-0.003228) | 0.003348 / 0.004328 (-0.000981) | 0.075193 / 0.004250 (0.070942) | 0.037382 / 0.037052 (0.000330) | 0.306929 / 0.258489 (0.048440) | 0.347304 / 0.293841 (0.053463) | 0.030235 / 0.128546 (-0.098312) | 0.011516 / 0.075646 (-0.064131) | 0.322249 / 0.419271 (-0.097023) | 0.044125 / 0.043533 (0.000592) | 0.303874 / 0.255139 (0.048735) | 0.326808 / 0.283200 (0.043608) | 0.088137 / 0.141683 (-0.053546) | 1.521426 / 1.452155 (0.069272) | 1.573823 / 1.492716 (0.081107) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.203204 / 0.018006 (0.185197) | 0.402247 / 0.000490 (0.401757) | 0.003146 / 0.000200 (0.002946) | 0.000088 / 0.000054 (0.000034) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022955 / 0.037411 (-0.014456) | 0.096059 / 0.014526 (0.081533) | 0.105552 / 0.176557 (-0.071004) | 0.167459 / 0.737135 (-0.569676) | 0.106723 / 0.296338 (-0.189615) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.454626 / 0.215209 (0.239417) | 4.556346 / 2.077655 (2.478691) | 2.220349 / 1.504120 (0.716229) | 2.011820 / 1.541195 (0.470625) | 2.048149 / 1.468490 
(0.579659) | 0.697583 / 4.584777 (-3.887194) | 3.428394 / 3.745712 (-0.317318) | 1.863872 / 5.269862 (-3.405989) | 1.159691 / 4.565676 (-3.405985) | 0.082598 / 0.424275 (-0.341677) | 0.012202 / 0.007607 (0.004594) | 0.555617 / 0.226044 (0.329572) | 5.545481 / 2.268929 (3.276553) | 2.650850 / 55.444624 (-52.793775) | 2.305864 / 6.876477 (-4.570613) | 2.392252 / 2.142072 (0.250179) | 0.808512 / 4.805227 (-3.996716) | 0.152086 / 6.500664 (-6.348578) | 0.066440 / 0.075469 (-0.009029) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.211789 / 1.841788 (-0.629999) | 13.515546 / 8.074308 (5.441238) | 13.859870 / 10.191392 (3.668478) | 0.150335 / 0.680424 (-0.530088) | 0.016578 / 0.534201 (-0.517623) | 0.379145 / 0.579283 (-0.200138) | 0.393735 / 0.434364 (-0.040628) | 0.460219 / 0.540337 (-0.080118) | 0.555896 / 1.386936 (-0.831040) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006402 / 0.011353 (-0.004950) | 0.004558 / 0.011008 (-0.006450) | 0.077332 / 0.038508 (0.038824) | 0.027955 / 0.023109 (0.004846) | 0.407877 / 0.275898 (0.131979) | 0.432552 / 0.323480 (0.109072) | 0.004850 / 0.007986 (-0.003135) | 0.003329 / 0.004328 (-0.000999) | 0.075767 / 0.004250 (0.071517) | 0.035940 / 0.037052 (-0.001112) | 0.419544 / 0.258489 (0.161055) | 0.454672 / 0.293841 (0.160831) | 0.030461 / 0.128546 (-0.098085) | 0.011536 / 0.075646 (-0.064111) | 0.085774 / 0.419271 (-0.333498) | 0.039408 / 0.043533 (-0.004125) | 0.389909 / 0.255139 (0.134770) | 0.403287 / 0.283200 (0.120088) | 0.088385 / 0.141683 (-0.053298) | 1.596840 / 1.452155 (0.144686) | 1.659296 / 1.492716 (0.166580) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216349 / 0.018006 (0.198342) | 0.394969 / 0.000490 (0.394479) | 0.000408 / 0.000200 (0.000208) | 0.000059 / 0.000054 (0.000005) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024346 / 0.037411 (-0.013066) | 0.099609 / 0.014526 (0.085084) | 0.106779 / 0.176557 (-0.069778) | 0.156889 / 0.737135 (-0.580247) | 0.110625 / 0.296338 (-0.185714) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.443809 / 0.215209 (0.228600) | 4.450524 / 2.077655 (2.372870) | 2.151694 / 1.504120 (0.647574) | 1.952521 / 1.541195 (0.411326) | 1.963320 / 1.468490 (0.494830) | 0.709291 / 4.584777 (-3.875486) | 3.415708 / 3.745712 (-0.330005) | 1.850498 / 5.269862 (-3.419363) | 1.164355 / 4.565676 (-3.401321) | 0.084977 / 0.424275 (-0.339298) | 0.013284 / 0.007607 (0.005677) | 0.555103 / 0.226044 (0.329059) | 5.583587 / 2.268929 (3.314658) | 2.608754 / 55.444624 (-52.835870) | 2.264079 / 6.876477 (-4.612398) | 2.272455 / 2.142072 (0.130382) | 0.820849 / 4.805227 (-3.984379) | 0.155063 / 6.500664 (-6.345601) | 0.069709 / 0.075469 (-0.005760) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.293285 / 1.841788 (-0.548503) | 14.181867 / 8.074308 (6.107559) | 13.021280 / 10.191392 (2.829888) | 0.130101 / 0.680424 (-0.550323) | 0.016461 / 0.534201 (-0.517740) | 0.383651 / 0.579283 (-0.195632) | 0.387353 / 0.434364 (-0.047011) | 0.443351 / 0.540337 (-0.096986) | 0.529448 / 1.386936 (-0.857488) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007513 / 0.011353 (-0.003840) | 0.005328 / 0.011008 (-0.005680) | 0.096937 / 0.038508 (0.058429) | 0.036230 / 0.023109 (0.013121) | 0.325808 / 0.275898 (0.049910) | 0.363601 / 0.323480 (0.040121) | 0.006130 / 0.007986 (-0.001855) | 0.004352 / 0.004328 (0.000023) | 0.073543 / 0.004250 (0.069293) | 0.054114 / 0.037052 (0.017062) | 0.328952 / 0.258489 (0.070463) | 0.366943 / 0.293841 (0.073102) | 0.035768 / 0.128546 (-0.092778) | 0.012505 / 0.075646 (-0.063142) | 0.332260 / 0.419271 (-0.087012) | 0.066673 / 0.043533 (0.023140) | 0.323866 / 0.255139 (0.068727) | 0.341311 / 0.283200 (0.058112) | 0.129898 / 0.141683 (-0.011785) | 1.456890 / 1.452155 (0.004735) | 1.546933 / 1.492716 (0.054217) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.299236 / 0.018006 (0.281229) | 0.496134 / 0.000490 (0.495645) | 0.004233 / 0.000200 (0.004033) | 0.000081 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028089 / 0.037411 (-0.009322) | 0.104723 / 0.014526 (0.090197) | 0.121032 / 0.176557 (-0.055525) | 0.179916 / 0.737135 (-0.557220) | 0.126628 / 0.296338 (-0.169711) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.403497 / 0.215209 (0.188288) | 4.052481 / 2.077655 (1.974827) | 1.804419 / 1.504120 (0.300299) | 1.619833 / 1.541195 (0.078638) | 1.732438 / 1.468490 
(0.263948) | 0.702474 / 4.584777 (-3.882303) | 3.808973 / 3.745712 (0.063261) | 3.682764 / 5.269862 (-1.587098) | 1.919184 / 4.565676 (-2.646493) | 0.086638 / 0.424275 (-0.337637) | 0.012265 / 0.007607 (0.004658) | 0.501273 / 0.226044 (0.275229) | 5.010918 / 2.268929 (2.741989) | 2.278114 / 55.444624 (-53.166510) | 1.942266 / 6.876477 (-4.934211) | 2.101982 / 2.142072 (-0.040091) | 0.847622 / 4.805227 (-3.957606) | 0.172973 / 6.500664 (-6.327691) | 0.066884 / 0.075469 (-0.008586) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.187609 / 1.841788 (-0.654179) | 15.089485 / 8.074308 (7.015177) | 14.787398 / 10.191392 (4.596006) | 0.168254 / 0.680424 (-0.512170) | 0.018266 / 0.534201 (-0.515935) | 0.423204 / 0.579283 (-0.156079) | 0.435238 / 0.434364 (0.000874) | 0.512473 / 0.540337 (-0.027864) | 0.618091 / 1.386936 (-0.768845) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007249 / 0.011353 (-0.004104) | 0.005297 / 0.011008 (-0.005711) | 0.076428 / 0.038508 (0.037920) | 0.033565 / 0.023109 (0.010456) | 0.373756 / 0.275898 (0.097858) | 0.407405 / 0.323480 (0.083925) | 0.006100 / 0.007986 (-0.001886) | 0.006482 / 0.004328 (0.002153) | 0.075884 / 0.004250 (0.071633) | 0.055338 / 0.037052 (0.018286) | 0.378721 / 0.258489 (0.120232) | 0.427065 / 0.293841 (0.133224) | 0.036285 / 0.128546 (-0.092261) | 0.012460 / 0.075646 (-0.063186) | 0.087641 / 0.419271 (-0.331630) | 0.048199 / 0.043533 (0.004666) | 0.386785 / 0.255139 (0.131646) | 0.386702 / 0.283200 (0.103503) | 0.110087 / 0.141683 (-0.031596) | 1.511204 / 1.452155 (0.059050) | 1.585671 / 1.492716 (0.092954) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.313558 / 0.018006 (0.295552) | 0.496991 / 0.000490 (0.496501) | 0.001492 / 0.000200 (0.001292) | 0.000093 / 0.000054 (0.000038) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031814 / 0.037411 (-0.005597) | 0.113486 / 0.014526 (0.098960) | 0.125208 / 0.176557 (-0.051348) | 0.174469 / 0.737135 (-0.562666) | 0.131095 / 0.296338 (-0.165244) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439282 / 0.215209 (0.224073) | 4.362286 / 2.077655 (2.284631) | 2.153271 / 1.504120 (0.649151) | 1.990482 / 1.541195 (0.449288) | 2.103322 / 1.468490 (0.634831) | 0.692522 / 4.584777 (-3.892254) | 3.861931 / 3.745712 (0.116219) | 3.686294 / 5.269862 (-1.583567) | 1.734525 / 4.565676 (-2.831152) | 0.085057 / 0.424275 (-0.339218) | 0.012116 / 0.007607 (0.004509) | 0.547996 / 0.226044 (0.321952) | 5.513835 / 2.268929 (3.244906) | 2.723829 / 55.444624 (-52.720795) | 2.404715 / 6.876477 (-4.471761) | 2.514768 / 2.142072 (0.372696) | 0.834972 / 4.805227 (-3.970255) | 0.168261 / 6.500664 (-6.332403) | 0.066464 / 0.075469 (-0.009005) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.259923 / 1.841788 (-0.581865) | 15.646277 / 8.074308 (7.571969) | 13.097598 / 10.191392 (2.906206) | 0.187991 / 0.680424 (-0.492433) | 0.017358 / 0.534201 (-0.516843) | 0.427979 / 0.579283 (-0.151304) | 0.425747 / 0.434364 (-0.008617) | 0.501907 / 0.540337 (-0.038431) | 0.595106 / 1.386936 (-0.791830) |\n\n</details>\n</details>\n\n\n",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.009378 / 0.011353 (-0.001975) | 0.006434 / 0.011008 (-0.004574) | 0.120603 / 0.038508 (0.082095) | 0.042929 / 0.023109 (0.019820) | 0.366853 / 0.275898 (0.090955) | 0.436795 / 0.323480 (0.113315) | 0.007730 / 0.007986 (-0.000256) | 0.004842 / 0.004328 (0.000513) | 0.091058 / 0.004250 (0.086808) | 0.058256 / 0.037052 (0.021203) | 0.378692 / 0.258489 (0.120203) | 0.467384 / 0.293841 (0.173543) | 0.042948 / 0.128546 (-0.085598) | 0.015172 / 0.075646 (-0.060475) | 0.409225 / 0.419271 (-0.010046) | 0.083672 / 0.043533 (0.040140) | 0.390088 / 0.255139 (0.134949) | 0.406965 / 0.283200 (0.123765) | 0.142132 / 0.141683 (0.000449) | 1.765737 / 1.452155 (0.313582) | 1.895419 / 1.492716 (0.402703) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.244052 / 0.018006 (0.226046) | 0.553383 / 0.000490 (0.552893) | 0.006798 / 0.000200 (0.006598) | 0.000227 / 0.000054 (0.000173) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032032 / 0.037411 (-0.005380) | 0.129990 / 0.014526 (0.115464) | 0.140338 / 0.176557 (-0.036219) | 0.212155 / 0.737135 (-0.524980) | 0.147395 / 0.296338 (-0.148943) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478760 / 0.215209 (0.263551) | 4.751335 / 2.077655 (2.673680) | 2.164755 / 1.504120 (0.660635) | 1.944288 / 1.541195 (0.403094) | 2.077657 / 1.468490 
(0.609167) | 0.818519 / 4.584777 (-3.766258) | 4.689013 / 3.745712 (0.943301) | 2.484079 / 5.269862 (-2.785782) | 1.788632 / 4.565676 (-2.777044) | 0.100484 / 0.424275 (-0.323791) | 0.013838 / 0.007607 (0.006231) | 0.589650 / 0.226044 (0.363605) | 5.859461 / 2.268929 (3.590533) | 2.670025 / 55.444624 (-52.774599) | 2.688709 / 6.876477 (-4.187768) | 2.408060 / 2.142072 (0.265988) | 0.972107 / 4.805227 (-3.833120) | 0.194425 / 6.500664 (-6.306239) | 0.076077 / 0.075469 (0.000608) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.430150 / 1.841788 (-0.411638) | 17.710507 / 8.074308 (9.636199) | 16.210789 / 10.191392 (6.019397) | 0.163940 / 0.680424 (-0.516484) | 0.020295 / 0.534201 (-0.513906) | 0.472596 / 0.579283 (-0.106687) | 0.483107 / 0.434364 (0.048743) | 0.585269 / 0.540337 (0.044931) | 0.705526 / 1.386936 (-0.681410) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008864 / 0.011353 (-0.002489) | 0.006095 / 0.011008 (-0.004913) | 0.088702 / 0.038508 (0.050194) | 0.041596 / 0.023109 (0.018486) | 0.453515 / 0.275898 (0.177617) | 0.476217 / 0.323480 (0.152737) | 0.007574 / 0.007986 (-0.000412) | 0.004727 / 0.004328 (0.000398) | 0.087271 / 0.004250 (0.083021) | 0.059631 / 0.037052 (0.022578) | 0.449379 / 0.258489 (0.190890) | 0.494436 / 0.293841 (0.200595) | 0.043448 / 0.128546 (-0.085098) | 0.014580 / 0.075646 (-0.061067) | 0.103836 / 0.419271 (-0.315435) | 0.057537 / 0.043533 (0.014004) | 0.449359 / 0.255139 (0.194220) | 0.447577 / 0.283200 (0.164377) | 0.123600 / 0.141683 (-0.018083) | 1.748448 / 1.452155 (0.296294) | 1.902116 / 1.492716 (0.409399) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237214 / 0.018006 (0.219207) | 0.497648 / 0.000490 (0.497158) | 0.003519 / 0.000200 (0.003319) | 0.000112 / 0.000054 (0.000058) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split 
|\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034477 / 0.037411 (-0.002934) | 0.132627 / 0.014526 (0.118101) | 0.139721 / 0.176557 (-0.036836) | 0.195705 / 0.737135 (-0.541430) | 0.150762 / 0.296338 (-0.145577) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.521306 / 0.215209 (0.306097) | 5.184982 / 2.077655 (3.107328) | 2.503979 / 1.504120 (0.999859) | 2.301054 / 1.541195 (0.759860) | 2.352713 / 1.468490 (0.884222) | 0.819804 / 4.584777 (-3.764973) | 4.584011 / 3.745712 (0.838299) | 2.497311 / 5.269862 (-2.772550) | 1.561262 / 4.565676 (-3.004414) | 0.101814 / 0.424275 (-0.322461) | 0.014078 / 0.007607 (0.006471) | 0.666564 / 0.226044 (0.440520) | 6.616379 / 2.268929 (4.347450) | 3.263892 / 55.444624 (-52.180732) | 2.891774 / 6.876477 (-3.984703) | 2.945260 / 2.142072 (0.803188) | 1.014379 / 4.805227 (-3.790848) | 0.201762 / 6.500664 (-6.298902) | 0.078012 / 0.075469 (0.002543) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.567808 / 1.841788 (-0.273980) | 19.096552 / 8.074308 (11.022244) | 15.522285 / 10.191392 (5.330893) | 0.226568 / 0.680424 (-0.453856) | 0.021078 / 0.534201 (-0.513123) | 0.501686 / 0.579283 (-0.077597) | 0.517575 / 0.434364 (0.083211) | 0.589685 / 0.540337 (0.049348) | 0.705053 / 1.386936 (-0.681883) |\n\n</details>\n</details>\n\n\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/3758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3758/comments | https://api.github.com/repos/huggingface/datasets/issues/3758/events | https://github.com/huggingface/datasets/issues/3758 | 1,143,366,393 | I_kwDODunzps5EJmL5 | 3,758 | head_qa file missing | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-02-18T16:32:43Z | 2022-02-28T14:29:18Z | 2022-02-21T14:39:19Z | null | ## Describe the bug
A file for the `head_qa` dataset is missing (https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t/HEAD_EN/train_HEAD_EN.json)
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("head_qa", name="en")
```
## Expected results
The dataset should be loaded
## Actual results
```
Downloading and preparing dataset head_qa/en (download: 75.69 MiB, generated: 2.69 MiB, post-processed: Unknown size, total: 78.38 MiB) to /home/slesage/.cache/huggingface/datasets/head_qa/en/1.1.0/583ab408e8baf54aab378c93715fadc4d8aa51b393e27c3484a877e2ac0278e9...
Downloading data: 2.21kB [00:00, 2.05MB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/load.py", line 1729, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/builder.py", line 665, in _download_and_prepare
verify_checksums(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t']
```
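The checksum mismatch above is the classic symptom of Google Drive serving its HTML virus-scan warning page instead of the actual file (the maintainer comment below confirms this). A quick way to check, not part of the original report:

```python
# Hypothetical check (not from the report): for large files, Google Drive
# answers with an HTML virus-scan warning page rather than the JSON data,
# which is why the downloaded file's checksum does not match.
import requests

url = "https://drive.google.com/u/0/uc?export=download&id=1a_95N5zQQoUCq8IBNVZgziHbeM-QxG2t"
response = requests.get(url)
print(response.headers.get("Content-Type"))  # text/html for the warning page, not JSON
print(response.text[:200])                   # start of the warning page markup
```

Note that skipping verification (`ignore_verifications=True` in this version of `datasets`) would not help here: the same HTML page would still be downloaded in place of the data file.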
## Environment info
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.11.0-1028-aws-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3758/timeline | null | completed | null | null | false | [
"We usually find issues with files hosted at Google Drive...\r\n\r\nIn this case we download the Google Drive Virus scan warning instead of the data file.",
"Fixed: https://huggingface.co/datasets/head_qa/viewer/en/train. Thanks\r\n\r\n<img width=\"1551\" alt=\"Capture d’écran 2022-02-28 à 15 29 04\" src=\"https://user-images.githubusercontent.com/1676121/156000224-fd3f62c6-8b54-4df1-8911-bdcb0bac3f1a.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/events | https://github.com/huggingface/datasets/pull/1883 | 808,750,623 | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | 1,883 | Add not-in-place implementations for several dataset transforms | [] | closed | false | null | 3 | 2021-02-15T18:44:26Z | 2021-02-24T14:54:49Z | 2021-02-24T14:53:26Z | null | Should we deprecate in-place versions of such methods? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"merged_at": "2021-02-24T14:53:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1883"
} | true | [
"@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)",
"I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.",
"Now let's update the documentation to use the new methods x)"
] |
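For context on the transforms this PR discusses, here is a minimal sketch of the not-in-place pattern, assuming the current `datasets` API where `rename_column` returns a new `Dataset` (the toy data is illustrative only):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

# Not-in-place: rename_column returns a new Dataset and leaves `ds` untouched,
# in contrast to the older in-place variants such as rename_column_.
renamed = ds.rename_column("a", "b")
print(ds.column_names)       # ['a']
print(renamed.column_names)  # ['b']
```

Returning a new object keeps transforms composable and avoids surprising callers who hold a reference to the original dataset, which is the motivation behind the deprecation question in the PR body.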
https://api.github.com/repos/huggingface/datasets/issues/5989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5989/comments | https://api.github.com/repos/huggingface/datasets/issues/5989/events | https://github.com/huggingface/datasets/issues/5989 | 1,774,134,091 | I_kwDODunzps5pvyNL | 5,989 | Set a rule on the config and split names | [] | open | false | null | 3 | 2023-06-26T07:34:14Z | 2023-07-19T14:22:54Z | null | null | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5989/timeline | null | null | null | null | false | [
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/diffusiondb/discussions/2#6374ff55b93cbdf65675f564\r\n\r\nThe config name is `random_10k [2m]`!"
] |
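A minimal sketch of the kind of validation the thread proposes; the allowed character set below is an assumption for illustration, not a rule that `datasets` has actually adopted:

```python
import re

# Hypothetical rule: restrict config/split names to letters, digits,
# '.', '-' and '_'. The exact character set is an assumption.
_NAME_RE = re.compile(r"[A-Za-z0-9._-]+")

def check_name(name: str) -> None:
    if not _NAME_RE.fullmatch(name):
        raise ValueError(
            f"Bad config/split name {name!r}: whitespace and special characters are not allowed"
        )

check_name("random_10k")  # fine
try:
    check_name("random_10k [2m]")  # the failing name from the report above
except ValueError as err:
    print(err)
```

Raising at creation time, as sketched here, would surface the problem to dataset authors directly instead of letting downstream services such as datasets-server fail later.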
https://api.github.com/repos/huggingface/datasets/issues/1696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1696/comments | https://api.github.com/repos/huggingface/datasets/issues/1696/events | https://github.com/huggingface/datasets/issues/1696 | 781,096,918 | MDU6SXNzdWU3ODEwOTY5MTg= | 1,696 | Unable to install datasets | [] | closed | false | null | 4 | 2021-01-07T07:24:37Z | 2021-01-08T00:33:05Z | 2021-01-07T22:06:05Z | null | **Edit**
I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with an earlier Python version. Thanks to @thomwolf for the insight!
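A minimal sketch of that workaround, assuming a `python3.8` binary is available on `PATH` (the commands are illustrative, not from the issue):

```python
# Hypothetical helper mirroring the edit above: build a clean virtual
# environment on a pre-3.9 interpreter and install datasets into it.
# Paths assume a Unix-like system, as in the report below.
import subprocess

subprocess.run(["python3.8", "-m", "venv", "env"], check=True)
subprocess.run(["env/bin/pip", "install", "--upgrade", "pip"], check=True)
subprocess.run(["env/bin/pip", "install", "datasets"], check=True)
```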
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). However, when I tried to install datasets using `pip install datasets`, I got a massive error message after getting stuck at "Installing build dependencies..."
I wondered whether creating a virtual environment would fix the problem, but it didn't help. Can anyone offer some advice on how to fix this issue?
Here's an error message:
`(env) Gas-MacBook-Pro:Downloads destiny$ pip install datasets
Collecting datasets
Using cached datasets-1.2.0-py3-none-any.whl (159 kB)
Collecting numpy>=1.17
Using cached numpy-1.19.5-cp39-cp39-macosx_10_9_x86_64.whl (15.6 MB)
Collecting pyarrow>=0.17.1
Using cached pyarrow-2.0.0.tar.gz (58.9 MB)
....
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceilf' [-Wincompatible-library-redeclaration]
int ceilf (void);
^
_configtest.c:9:5: note: 'ceilf' is a builtin with type 'float (float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintf' [-Wincompatible-library-redeclaration]
int rintf (void);
^
_configtest.c:10:5: note: 'rintf' is a builtin with type 'float (float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncf' [-Wincompatible-library-redeclaration]
int truncf (void);
^
_configtest.c:11:5: note: 'truncf' is a builtin with type 'float (float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtf' [-Wincompatible-library-redeclaration]
int sqrtf (void);
^
_configtest.c:12:5: note: 'sqrtf' is a builtin with type 'float (float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10f' [-Wincompatible-library-redeclaration]
int log10f (void);
^
_configtest.c:13:5: note: 'log10f' is a builtin with type 'float (float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logf' [-Wincompatible-library-redeclaration]
int logf (void);
^
_configtest.c:14:5: note: 'logf' is a builtin with type 'float (float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pf' [-Wincompatible-library-redeclaration]
int log1pf (void);
^
_configtest.c:15:5: note: 'log1pf' is a builtin with type 'float (float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expf' [-Wincompatible-library-redeclaration]
int expf (void);
^
_configtest.c:16:5: note: 'expf' is a builtin with type 'float (float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1f' [-Wincompatible-library-redeclaration]
int expm1f (void);
^
_configtest.c:17:5: note: 'expm1f' is a builtin with type 'float (float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinf' [-Wincompatible-library-redeclaration]
int asinf (void);
^
_configtest.c:18:5: note: 'asinf' is a builtin with type 'float (float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosf' [-Wincompatible-library-redeclaration]
int acosf (void);
^
_configtest.c:19:5: note: 'acosf' is a builtin with type 'float (float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanf' [-Wincompatible-library-redeclaration]
int atanf (void);
^
_configtest.c:20:5: note: 'atanf' is a builtin with type 'float (float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhf' [-Wincompatible-library-redeclaration]
int asinhf (void);
^
_configtest.c:21:5: note: 'asinhf' is a builtin with type 'float (float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshf' [-Wincompatible-library-redeclaration]
int acoshf (void);
^
_configtest.c:22:5: note: 'acoshf' is a builtin with type 'float (float)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhf' [-Wincompatible-library-redeclaration]
int atanhf (void);
^
_configtest.c:23:5: note: 'atanhf' is a builtin with type 'float (float)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotf' [-Wincompatible-library-redeclaration]
int hypotf (void);
^
_configtest.c:24:5: note: 'hypotf' is a builtin with type 'float (float, float)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2f' [-Wincompatible-library-redeclaration]
int atan2f (void);
^
_configtest.c:25:5: note: 'atan2f' is a builtin with type 'float (float, float)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powf' [-Wincompatible-library-redeclaration]
int powf (void);
^
_configtest.c:26:5: note: 'powf' is a builtin with type 'float (float, float)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodf' [-Wincompatible-library-redeclaration]
int fmodf (void);
^
_configtest.c:27:5: note: 'fmodf' is a builtin with type 'float (float, float)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modff' [-Wincompatible-library-redeclaration]
int modff (void);
^
_configtest.c:28:5: note: 'modff' is a builtin with type 'float (float, float *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpf' [-Wincompatible-library-redeclaration]
int frexpf (void);
^
_configtest.c:29:5: note: 'frexpf' is a builtin with type 'float (float, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpf' [-Wincompatible-library-redeclaration]
int ldexpf (void);
^
_configtest.c:30:5: note: 'ldexpf' is a builtin with type 'float (float, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2f' [-Wincompatible-library-redeclaration]
int exp2f (void);
^
_configtest.c:31:5: note: 'exp2f' is a builtin with type 'float (float)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2f' [-Wincompatible-library-redeclaration]
int log2f (void);
^
_configtest.c:32:5: note: 'log2f' is a builtin with type 'float (float)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignf' [-Wincompatible-library-redeclaration]
int copysignf (void);
^
_configtest.c:33:5: note: 'copysignf' is a builtin with type 'float (float, float)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterf' [-Wincompatible-library-redeclaration]
int nextafterf (void);
^
_configtest.c:34:5: note: 'nextafterf' is a builtin with type 'float (float, float)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtf' [-Wincompatible-library-redeclaration]
int cbrtf (void);
^
_configtest.c:35:5: note: 'cbrtf' is a builtin with type 'float (float)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'sinl' [-Wincompatible-library-redeclaration]
int sinl (void);
^
_configtest.c:1:5: note: 'sinl' is a builtin with type 'long double (long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cosl' [-Wincompatible-library-redeclaration]
int cosl (void);
^
_configtest.c:2:5: note: 'cosl' is a builtin with type 'long double (long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'tanl' [-Wincompatible-library-redeclaration]
int tanl (void);
^
_configtest.c:3:5: note: 'tanl' is a builtin with type 'long double (long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'sinhl' [-Wincompatible-library-redeclaration]
int sinhl (void);
^
_configtest.c:4:5: note: 'sinhl' is a builtin with type 'long double (long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'coshl' [-Wincompatible-library-redeclaration]
int coshl (void);
^
_configtest.c:5:5: note: 'coshl' is a builtin with type 'long double (long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'tanhl' [-Wincompatible-library-redeclaration]
int tanhl (void);
^
_configtest.c:6:5: note: 'tanhl' is a builtin with type 'long double (long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'fabsl' [-Wincompatible-library-redeclaration]
int fabsl (void);
^
_configtest.c:7:5: note: 'fabsl' is a builtin with type 'long double (long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'floorl' [-Wincompatible-library-redeclaration]
int floorl (void);
^
_configtest.c:8:5: note: 'floorl' is a builtin with type 'long double (long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ceill' [-Wincompatible-library-redeclaration]
int ceill (void);
^
_configtest.c:9:5: note: 'ceill' is a builtin with type 'long double (long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'rintl' [-Wincompatible-library-redeclaration]
int rintl (void);
^
_configtest.c:10:5: note: 'rintl' is a builtin with type 'long double (long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'truncl' [-Wincompatible-library-redeclaration]
int truncl (void);
^
_configtest.c:11:5: note: 'truncl' is a builtin with type 'long double (long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'sqrtl' [-Wincompatible-library-redeclaration]
int sqrtl (void);
^
_configtest.c:12:5: note: 'sqrtl' is a builtin with type 'long double (long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'log10l' [-Wincompatible-library-redeclaration]
int log10l (void);
^
_configtest.c:13:5: note: 'log10l' is a builtin with type 'long double (long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'logl' [-Wincompatible-library-redeclaration]
int logl (void);
^
_configtest.c:14:5: note: 'logl' is a builtin with type 'long double (long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'log1pl' [-Wincompatible-library-redeclaration]
int log1pl (void);
^
_configtest.c:15:5: note: 'log1pl' is a builtin with type 'long double (long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'expl' [-Wincompatible-library-redeclaration]
int expl (void);
^
_configtest.c:16:5: note: 'expl' is a builtin with type 'long double (long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'expm1l' [-Wincompatible-library-redeclaration]
int expm1l (void);
^
_configtest.c:17:5: note: 'expm1l' is a builtin with type 'long double (long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'asinl' [-Wincompatible-library-redeclaration]
int asinl (void);
^
_configtest.c:18:5: note: 'asinl' is a builtin with type 'long double (long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'acosl' [-Wincompatible-library-redeclaration]
int acosl (void);
^
_configtest.c:19:5: note: 'acosl' is a builtin with type 'long double (long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'atanl' [-Wincompatible-library-redeclaration]
int atanl (void);
^
_configtest.c:20:5: note: 'atanl' is a builtin with type 'long double (long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'asinhl' [-Wincompatible-library-redeclaration]
int asinhl (void);
^
_configtest.c:21:5: note: 'asinhl' is a builtin with type 'long double (long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'acoshl' [-Wincompatible-library-redeclaration]
int acoshl (void);
^
_configtest.c:22:5: note: 'acoshl' is a builtin with type 'long double (long double)'
_configtest.c:23:5: warning: incompatible redeclaration of library function 'atanhl' [-Wincompatible-library-redeclaration]
int atanhl (void);
^
_configtest.c:23:5: note: 'atanhl' is a builtin with type 'long double (long double)'
_configtest.c:24:5: warning: incompatible redeclaration of library function 'hypotl' [-Wincompatible-library-redeclaration]
int hypotl (void);
^
_configtest.c:24:5: note: 'hypotl' is a builtin with type 'long double (long double, long double)'
_configtest.c:25:5: warning: incompatible redeclaration of library function 'atan2l' [-Wincompatible-library-redeclaration]
int atan2l (void);
^
_configtest.c:25:5: note: 'atan2l' is a builtin with type 'long double (long double, long double)'
_configtest.c:26:5: warning: incompatible redeclaration of library function 'powl' [-Wincompatible-library-redeclaration]
int powl (void);
^
_configtest.c:26:5: note: 'powl' is a builtin with type 'long double (long double, long double)'
_configtest.c:27:5: warning: incompatible redeclaration of library function 'fmodl' [-Wincompatible-library-redeclaration]
int fmodl (void);
^
_configtest.c:27:5: note: 'fmodl' is a builtin with type 'long double (long double, long double)'
_configtest.c:28:5: warning: incompatible redeclaration of library function 'modfl' [-Wincompatible-library-redeclaration]
int modfl (void);
^
_configtest.c:28:5: note: 'modfl' is a builtin with type 'long double (long double, long double *)'
_configtest.c:29:5: warning: incompatible redeclaration of library function 'frexpl' [-Wincompatible-library-redeclaration]
int frexpl (void);
^
_configtest.c:29:5: note: 'frexpl' is a builtin with type 'long double (long double, int *)'
_configtest.c:30:5: warning: incompatible redeclaration of library function 'ldexpl' [-Wincompatible-library-redeclaration]
int ldexpl (void);
^
_configtest.c:30:5: note: 'ldexpl' is a builtin with type 'long double (long double, int)'
_configtest.c:31:5: warning: incompatible redeclaration of library function 'exp2l' [-Wincompatible-library-redeclaration]
int exp2l (void);
^
_configtest.c:31:5: note: 'exp2l' is a builtin with type 'long double (long double)'
_configtest.c:32:5: warning: incompatible redeclaration of library function 'log2l' [-Wincompatible-library-redeclaration]
int log2l (void);
^
_configtest.c:32:5: note: 'log2l' is a builtin with type 'long double (long double)'
_configtest.c:33:5: warning: incompatible redeclaration of library function 'copysignl' [-Wincompatible-library-redeclaration]
int copysignl (void);
^
_configtest.c:33:5: note: 'copysignl' is a builtin with type 'long double (long double, long double)'
_configtest.c:34:5: warning: incompatible redeclaration of library function 'nextafterl' [-Wincompatible-library-redeclaration]
int nextafterl (void);
^
_configtest.c:34:5: note: 'nextafterl' is a builtin with type 'long double (long double, long double)'
_configtest.c:35:5: warning: incompatible redeclaration of library function 'cbrtl' [-Wincompatible-library-redeclaration]
int cbrtl (void);
^
_configtest.c:35:5: note: 'cbrtl' is a builtin with type 'long double (long double)'
35 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:8:12: error: use of undeclared identifier 'HAVE_DECL_SIGNBIT'
(void) HAVE_DECL_SIGNBIT;
^
1 error generated.
failure.
removing: _configtest.c _configtest.o
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabs' [-Wincompatible-library-redeclaration]
int cabs (void);
^
_configtest.c:1:5: note: 'cabs' is a builtin with type 'double (_Complex double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacos' [-Wincompatible-library-redeclaration]
int cacos (void);
^
_configtest.c:2:5: note: 'cacos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacosh' [-Wincompatible-library-redeclaration]
int cacosh (void);
^
_configtest.c:3:5: note: 'cacosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'carg' [-Wincompatible-library-redeclaration]
int carg (void);
^
_configtest.c:4:5: note: 'carg' is a builtin with type 'double (_Complex double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casin' [-Wincompatible-library-redeclaration]
int casin (void);
^
_configtest.c:5:5: note: 'casin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinh' [-Wincompatible-library-redeclaration]
int casinh (void);
^
_configtest.c:6:5: note: 'casinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catan' [-Wincompatible-library-redeclaration]
int catan (void);
^
_configtest.c:7:5: note: 'catan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanh' [-Wincompatible-library-redeclaration]
int catanh (void);
^
_configtest.c:8:5: note: 'catanh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccos' [-Wincompatible-library-redeclaration]
int ccos (void);
^
_configtest.c:9:5: note: 'ccos' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccosh' [-Wincompatible-library-redeclaration]
int ccosh (void);
^
_configtest.c:10:5: note: 'ccosh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexp' [-Wincompatible-library-redeclaration]
int cexp (void);
^
_configtest.c:11:5: note: 'cexp' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimag' [-Wincompatible-library-redeclaration]
int cimag (void);
^
_configtest.c:12:5: note: 'cimag' is a builtin with type 'double (_Complex double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clog' [-Wincompatible-library-redeclaration]
int clog (void);
^
_configtest.c:13:5: note: 'clog' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conj' [-Wincompatible-library-redeclaration]
int conj (void);
^
_configtest.c:14:5: note: 'conj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpow' [-Wincompatible-library-redeclaration]
int cpow (void);
^
_configtest.c:15:5: note: 'cpow' is a builtin with type '_Complex double (_Complex double, _Complex double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cproj' [-Wincompatible-library-redeclaration]
int cproj (void);
^
_configtest.c:16:5: note: 'cproj' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creal' [-Wincompatible-library-redeclaration]
int creal (void);
^
_configtest.c:17:5: note: 'creal' is a builtin with type 'double (_Complex double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csin' [-Wincompatible-library-redeclaration]
int csin (void);
^
_configtest.c:18:5: note: 'csin' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinh' [-Wincompatible-library-redeclaration]
int csinh (void);
^
_configtest.c:19:5: note: 'csinh' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrt' [-Wincompatible-library-redeclaration]
int csqrt (void);
^
_configtest.c:20:5: note: 'csqrt' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctan' [-Wincompatible-library-redeclaration]
int ctan (void);
^
_configtest.c:21:5: note: 'ctan' is a builtin with type '_Complex double (_Complex double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanh' [-Wincompatible-library-redeclaration]
int ctanh (void);
^
_configtest.c:22:5: note: 'ctanh' is a builtin with type '_Complex double (_Complex double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsf' [-Wincompatible-library-redeclaration]
int cabsf (void);
^
_configtest.c:1:5: note: 'cabsf' is a builtin with type 'float (_Complex float)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosf' [-Wincompatible-library-redeclaration]
int cacosf (void);
^
_configtest.c:2:5: note: 'cacosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshf' [-Wincompatible-library-redeclaration]
int cacoshf (void);
^
_configtest.c:3:5: note: 'cacoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargf' [-Wincompatible-library-redeclaration]
int cargf (void);
^
_configtest.c:4:5: note: 'cargf' is a builtin with type 'float (_Complex float)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinf' [-Wincompatible-library-redeclaration]
int casinf (void);
^
_configtest.c:5:5: note: 'casinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhf' [-Wincompatible-library-redeclaration]
int casinhf (void);
^
_configtest.c:6:5: note: 'casinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanf' [-Wincompatible-library-redeclaration]
int catanf (void);
^
_configtest.c:7:5: note: 'catanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhf' [-Wincompatible-library-redeclaration]
int catanhf (void);
^
_configtest.c:8:5: note: 'catanhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosf' [-Wincompatible-library-redeclaration]
int ccosf (void);
^
_configtest.c:9:5: note: 'ccosf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshf' [-Wincompatible-library-redeclaration]
int ccoshf (void);
^
_configtest.c:10:5: note: 'ccoshf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpf' [-Wincompatible-library-redeclaration]
int cexpf (void);
^
_configtest.c:11:5: note: 'cexpf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagf' [-Wincompatible-library-redeclaration]
int cimagf (void);
^
_configtest.c:12:5: note: 'cimagf' is a builtin with type 'float (_Complex float)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogf' [-Wincompatible-library-redeclaration]
int clogf (void);
^
_configtest.c:13:5: note: 'clogf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjf' [-Wincompatible-library-redeclaration]
int conjf (void);
^
_configtest.c:14:5: note: 'conjf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowf' [-Wincompatible-library-redeclaration]
int cpowf (void);
^
_configtest.c:15:5: note: 'cpowf' is a builtin with type '_Complex float (_Complex float, _Complex float)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojf' [-Wincompatible-library-redeclaration]
int cprojf (void);
^
_configtest.c:16:5: note: 'cprojf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'crealf' [-Wincompatible-library-redeclaration]
int crealf (void);
^
_configtest.c:17:5: note: 'crealf' is a builtin with type 'float (_Complex float)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinf' [-Wincompatible-library-redeclaration]
int csinf (void);
^
_configtest.c:18:5: note: 'csinf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhf' [-Wincompatible-library-redeclaration]
int csinhf (void);
^
_configtest.c:19:5: note: 'csinhf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtf' [-Wincompatible-library-redeclaration]
int csqrtf (void);
^
_configtest.c:20:5: note: 'csqrtf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanf' [-Wincompatible-library-redeclaration]
int ctanf (void);
^
_configtest.c:21:5: note: 'ctanf' is a builtin with type '_Complex float (_Complex float)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhf' [-Wincompatible-library-redeclaration]
int ctanhf (void);
^
_configtest.c:22:5: note: 'ctanhf' is a builtin with type '_Complex float (_Complex float)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'cabsl' [-Wincompatible-library-redeclaration]
int cabsl (void);
^
_configtest.c:1:5: note: 'cabsl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:2:5: warning: incompatible redeclaration of library function 'cacosl' [-Wincompatible-library-redeclaration]
int cacosl (void);
^
_configtest.c:2:5: note: 'cacosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:3:5: warning: incompatible redeclaration of library function 'cacoshl' [-Wincompatible-library-redeclaration]
int cacoshl (void);
^
_configtest.c:3:5: note: 'cacoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:4:5: warning: incompatible redeclaration of library function 'cargl' [-Wincompatible-library-redeclaration]
int cargl (void);
^
_configtest.c:4:5: note: 'cargl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:5:5: warning: incompatible redeclaration of library function 'casinl' [-Wincompatible-library-redeclaration]
int casinl (void);
^
_configtest.c:5:5: note: 'casinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:6:5: warning: incompatible redeclaration of library function 'casinhl' [-Wincompatible-library-redeclaration]
int casinhl (void);
^
_configtest.c:6:5: note: 'casinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:7:5: warning: incompatible redeclaration of library function 'catanl' [-Wincompatible-library-redeclaration]
int catanl (void);
^
_configtest.c:7:5: note: 'catanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:8:5: warning: incompatible redeclaration of library function 'catanhl' [-Wincompatible-library-redeclaration]
int catanhl (void);
^
_configtest.c:8:5: note: 'catanhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:9:5: warning: incompatible redeclaration of library function 'ccosl' [-Wincompatible-library-redeclaration]
int ccosl (void);
^
_configtest.c:9:5: note: 'ccosl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:10:5: warning: incompatible redeclaration of library function 'ccoshl' [-Wincompatible-library-redeclaration]
int ccoshl (void);
^
_configtest.c:10:5: note: 'ccoshl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:11:5: warning: incompatible redeclaration of library function 'cexpl' [-Wincompatible-library-redeclaration]
int cexpl (void);
^
_configtest.c:11:5: note: 'cexpl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:12:5: warning: incompatible redeclaration of library function 'cimagl' [-Wincompatible-library-redeclaration]
int cimagl (void);
^
_configtest.c:12:5: note: 'cimagl' is a builtin with type 'long double (_Complex long double)'
_configtest.c:13:5: warning: incompatible redeclaration of library function 'clogl' [-Wincompatible-library-redeclaration]
int clogl (void);
^
_configtest.c:13:5: note: 'clogl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:14:5: warning: incompatible redeclaration of library function 'conjl' [-Wincompatible-library-redeclaration]
int conjl (void);
^
_configtest.c:14:5: note: 'conjl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:15:5: warning: incompatible redeclaration of library function 'cpowl' [-Wincompatible-library-redeclaration]
int cpowl (void);
^
_configtest.c:15:5: note: 'cpowl' is a builtin with type '_Complex long double (_Complex long double, _Complex long double)'
_configtest.c:16:5: warning: incompatible redeclaration of library function 'cprojl' [-Wincompatible-library-redeclaration]
int cprojl (void);
^
_configtest.c:16:5: note: 'cprojl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:17:5: warning: incompatible redeclaration of library function 'creall' [-Wincompatible-library-redeclaration]
int creall (void);
^
_configtest.c:17:5: note: 'creall' is a builtin with type 'long double (_Complex long double)'
_configtest.c:18:5: warning: incompatible redeclaration of library function 'csinl' [-Wincompatible-library-redeclaration]
int csinl (void);
^
_configtest.c:18:5: note: 'csinl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:19:5: warning: incompatible redeclaration of library function 'csinhl' [-Wincompatible-library-redeclaration]
int csinhl (void);
^
_configtest.c:19:5: note: 'csinhl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:20:5: warning: incompatible redeclaration of library function 'csqrtl' [-Wincompatible-library-redeclaration]
int csqrtl (void);
^
_configtest.c:20:5: note: 'csqrtl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:21:5: warning: incompatible redeclaration of library function 'ctanl' [-Wincompatible-library-redeclaration]
int ctanl (void);
^
_configtest.c:21:5: note: 'ctanl' is a builtin with type '_Complex long double (_Complex long double)'
_configtest.c:22:5: warning: incompatible redeclaration of library function 'ctanhl' [-Wincompatible-library-redeclaration]
int ctanhl (void);
^
_configtest.c:22:5: note: 'ctanhl' is a builtin with type '_Complex long double (_Complex long double)'
22 warnings generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
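/* A minimal sketch of the kind of _configtest.c exercised above (assumption:
   reconstructed from the snippets in this log -- the real file is generated
   by numpy's setup and deleted right after). Each libm function is declared
   with a deliberately wrong prototype, which is why clang emits the
   "incompatible redeclaration" warnings; the feature check only cares
   whether the final link ("clang _configtest.o -o _configtest") succeeds. */
int cargl (void);

int main (void)
{
    cargl();      /* never meant to run correctly; only the link matters */
    return 0;
}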
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:2:12: warning: unused function 'static_func' [-Wunused-function]
static int static_func (char * restrict a)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:3:19: warning: unused function 'static_func' [-Wunused-function]
static inline int static_func (void)
^
1 warning generated.
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h
#define SIZEOF_PY_INTPTR_T 8
#define SIZEOF_OFF_T 8
#define SIZEOF_PY_LONG_LONG 8
#define MATHLIB
#define HAVE_SIN 1
#define HAVE_COS 1
#define HAVE_TAN 1
#define HAVE_SINH 1
#define HAVE_COSH 1
#define HAVE_TANH 1
#define HAVE_FABS 1
#define HAVE_FLOOR 1
#define HAVE_CEIL 1
#define HAVE_SQRT 1
#define HAVE_LOG10 1
#define HAVE_LOG 1
#define HAVE_EXP 1
#define HAVE_ASIN 1
#define HAVE_ACOS 1
#define HAVE_ATAN 1
#define HAVE_FMOD 1
#define HAVE_MODF 1
#define HAVE_FREXP 1
#define HAVE_LDEXP 1
#define HAVE_RINT 1
#define HAVE_TRUNC 1
#define HAVE_EXP2 1
#define HAVE_LOG2 1
#define HAVE_ATAN2 1
#define HAVE_POW 1
#define HAVE_NEXTAFTER 1
#define HAVE_STRTOLL 1
#define HAVE_STRTOULL 1
#define HAVE_CBRT 1
#define HAVE_STRTOLD_L 1
#define HAVE_BACKTRACE 1
#define HAVE_MADVISE 1
#define HAVE_XMMINTRIN_H 1
#define HAVE_EMMINTRIN_H 1
#define HAVE_XLOCALE_H 1
#define HAVE_DLFCN_H 1
#define HAVE_SYS_MMAN_H 1
#define HAVE___BUILTIN_ISNAN 1
#define HAVE___BUILTIN_ISINF 1
#define HAVE___BUILTIN_ISFINITE 1
#define HAVE___BUILTIN_BSWAP32 1
#define HAVE___BUILTIN_BSWAP64 1
#define HAVE___BUILTIN_EXPECT 1
#define HAVE___BUILTIN_MUL_OVERFLOW 1
#define HAVE___BUILTIN_CPU_SUPPORTS 1
#define HAVE__M_FROM_INT64 1
#define HAVE__MM_LOAD_PS 1
#define HAVE__MM_PREFETCH 1
#define HAVE__MM_LOAD_PD 1
#define HAVE___BUILTIN_PREFETCH 1
#define HAVE_LINK_AVX 1
#define HAVE_LINK_AVX2 1
#define HAVE_XGETBV 1
#define HAVE_ATTRIBUTE_NONNULL 1
#define HAVE_ATTRIBUTE_TARGET_AVX 1
#define HAVE_ATTRIBUTE_TARGET_AVX2 1
#define HAVE___THREAD 1
#define HAVE_SINF 1
#define HAVE_COSF 1
#define HAVE_TANF 1
#define HAVE_SINHF 1
#define HAVE_COSHF 1
#define HAVE_TANHF 1
#define HAVE_FABSF 1
#define HAVE_FLOORF 1
#define HAVE_CEILF 1
#define HAVE_RINTF 1
#define HAVE_TRUNCF 1
#define HAVE_SQRTF 1
#define HAVE_LOG10F 1
#define HAVE_LOGF 1
#define HAVE_LOG1PF 1
#define HAVE_EXPF 1
#define HAVE_EXPM1F 1
#define HAVE_ASINF 1
#define HAVE_ACOSF 1
#define HAVE_ATANF 1
#define HAVE_ASINHF 1
#define HAVE_ACOSHF 1
#define HAVE_ATANHF 1
#define HAVE_HYPOTF 1
#define HAVE_ATAN2F 1
#define HAVE_POWF 1
#define HAVE_FMODF 1
#define HAVE_MODFF 1
#define HAVE_FREXPF 1
#define HAVE_LDEXPF 1
#define HAVE_EXP2F 1
#define HAVE_LOG2F 1
#define HAVE_COPYSIGNF 1
#define HAVE_NEXTAFTERF 1
#define HAVE_CBRTF 1
#define HAVE_SINL 1
#define HAVE_COSL 1
#define HAVE_TANL 1
#define HAVE_SINHL 1
#define HAVE_COSHL 1
#define HAVE_TANHL 1
#define HAVE_FABSL 1
#define HAVE_FLOORL 1
#define HAVE_CEILL 1
#define HAVE_RINTL 1
#define HAVE_TRUNCL 1
#define HAVE_SQRTL 1
#define HAVE_LOG10L 1
#define HAVE_LOGL 1
#define HAVE_LOG1PL 1
#define HAVE_EXPL 1
#define HAVE_EXPM1L 1
#define HAVE_ASINL 1
#define HAVE_ACOSL 1
#define HAVE_ATANL 1
#define HAVE_ASINHL 1
#define HAVE_ACOSHL 1
#define HAVE_ATANHL 1
#define HAVE_HYPOTL 1
#define HAVE_ATAN2L 1
#define HAVE_POWL 1
#define HAVE_FMODL 1
#define HAVE_MODFL 1
#define HAVE_FREXPL 1
#define HAVE_LDEXPL 1
#define HAVE_EXP2L 1
#define HAVE_LOG2L 1
#define HAVE_COPYSIGNL 1
#define HAVE_NEXTAFTERL 1
#define HAVE_CBRTL 1
#define HAVE_DECL_SIGNBIT
#define HAVE_COMPLEX_H 1
#define HAVE_CABS 1
#define HAVE_CACOS 1
#define HAVE_CACOSH 1
#define HAVE_CARG 1
#define HAVE_CASIN 1
#define HAVE_CASINH 1
#define HAVE_CATAN 1
#define HAVE_CATANH 1
#define HAVE_CCOS 1
#define HAVE_CCOSH 1
#define HAVE_CEXP 1
#define HAVE_CIMAG 1
#define HAVE_CLOG 1
#define HAVE_CONJ 1
#define HAVE_CPOW 1
#define HAVE_CPROJ 1
#define HAVE_CREAL 1
#define HAVE_CSIN 1
#define HAVE_CSINH 1
#define HAVE_CSQRT 1
#define HAVE_CTAN 1
#define HAVE_CTANH 1
#define HAVE_CABSF 1
#define HAVE_CACOSF 1
#define HAVE_CACOSHF 1
#define HAVE_CARGF 1
#define HAVE_CASINF 1
#define HAVE_CASINHF 1
#define HAVE_CATANF 1
#define HAVE_CATANHF 1
#define HAVE_CCOSF 1
#define HAVE_CCOSHF 1
#define HAVE_CEXPF 1
#define HAVE_CIMAGF 1
#define HAVE_CLOGF 1
#define HAVE_CONJF 1
#define HAVE_CPOWF 1
#define HAVE_CPROJF 1
#define HAVE_CREALF 1
#define HAVE_CSINF 1
#define HAVE_CSINHF 1
#define HAVE_CSQRTF 1
#define HAVE_CTANF 1
#define HAVE_CTANHF 1
#define HAVE_CABSL 1
#define HAVE_CACOSL 1
#define HAVE_CACOSHL 1
#define HAVE_CARGL 1
#define HAVE_CASINL 1
#define HAVE_CASINHL 1
#define HAVE_CATANL 1
#define HAVE_CATANHL 1
#define HAVE_CCOSL 1
#define HAVE_CCOSHL 1
#define HAVE_CEXPL 1
#define HAVE_CIMAGL 1
#define HAVE_CLOGL 1
#define HAVE_CONJL 1
#define HAVE_CPOWL 1
#define HAVE_CPROJL 1
#define HAVE_CREALL 1
#define HAVE_CSINL 1
#define HAVE_CSINHL 1
#define HAVE_CSQRTL 1
#define HAVE_CTANL 1
#define HAVE_CTANHL 1
#define NPY_RESTRICT restrict
#define NPY_RELAXED_STRIDES_CHECKING 1
#define HAVE_LDOUBLE_INTEL_EXTENDED_16_BYTES_LE 1
#define NPY_PY3K 1
#ifndef __cplusplus
/* #undef inline */
#endif
#ifndef _NPY_NPY_CONFIG_H_
#error config.h should never be included directly, include npy_config.h instead
#endif
EOF
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
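/* Note the #error guard at the bottom of the generated config.h: it is never
   included directly. A simplified, illustrative sketch of how numpy's
   npy_config.h satisfies that guard (assumption: paraphrased, not the
   verbatim header): */
#ifndef _NPY_NPY_CONFIG_H_
#define _NPY_NPY_CONFIG_H_
#include "config.h"   /* the guard macro is defined, so the #error is skipped */
#endif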
Generating build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
_configtest.c:1:5: warning: incompatible redeclaration of library function 'exp' [-Wincompatible-library-redeclaration]
int exp (void);
^
_configtest.c:1:5: note: 'exp' is a builtin with type 'double (double)'
1 warning generated.
clang _configtest.o -o _configtest
success!
removing: _configtest.c _configtest.o _configtest.o.d _configtest
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -c'
clang: _configtest.c
success!
removing: _configtest.c _configtest.o _configtest.o.d
File: build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h
#define NPY_SIZEOF_SHORT SIZEOF_SHORT
#define NPY_SIZEOF_INT SIZEOF_INT
#define NPY_SIZEOF_LONG SIZEOF_LONG
#define NPY_SIZEOF_FLOAT 4
#define NPY_SIZEOF_COMPLEX_FLOAT 8
#define NPY_SIZEOF_DOUBLE 8
#define NPY_SIZEOF_COMPLEX_DOUBLE 16
#define NPY_SIZEOF_LONGDOUBLE 16
#define NPY_SIZEOF_COMPLEX_LONGDOUBLE 32
#define NPY_SIZEOF_PY_INTPTR_T 8
#define NPY_SIZEOF_OFF_T 8
#define NPY_SIZEOF_PY_LONG_LONG 8
#define NPY_SIZEOF_LONGLONG 8
#define NPY_NO_SMP 0
#define NPY_HAVE_DECL_ISNAN
#define NPY_HAVE_DECL_ISINF
#define NPY_HAVE_DECL_ISFINITE
#define NPY_HAVE_DECL_SIGNBIT
#define NPY_USE_C99_COMPLEX 1
#define NPY_HAVE_COMPLEX_DOUBLE 1
#define NPY_HAVE_COMPLEX_FLOAT 1
#define NPY_HAVE_COMPLEX_LONG_DOUBLE 1
#define NPY_RELAXED_STRIDES_CHECKING 1
#define NPY_USE_C99_FORMATS 1
#define NPY_VISIBILITY_HIDDEN __attribute__((visibility("hidden")))
#define NPY_ABI_VERSION 0x01000009
#define NPY_API_VERSION 0x0000000D
#ifndef __STDC_FORMAT_MACROS
#define __STDC_FORMAT_MACROS 1
#endif
EOF
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h']
building extension "numpy.core._multiarray_tests" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
building extension "numpy.core._multiarray_umath" sources
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h' to sources.
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h' to sources.
executing numpy/core/code_generators/generate_numpy_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h' to sources.
executing numpy/core/code_generators/generate_ufunc_api.py
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
creating build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath' to include_dirs.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h
adding 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common' to include_dirs.
numpy.core - nothing done with h_files = ['build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/funcs.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/simd.inc', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_internal.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/src/common/templ_common.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/config.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/_numpyconfig.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h', 'build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__ufunc_api.h']
building extension "numpy.core._umath_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
building extension "numpy.core._rational_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
building extension "numpy.core._struct_ufunc_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
building extension "numpy.core._operand_flag_tests" sources
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
building extension "numpy.fft.fftpack_lite" sources
building extension "numpy.linalg.lapack_lite" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/linalg
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
building extension "numpy.linalg._umath_linalg" sources
adding 'numpy/linalg/lapack_lite/python_xerbla.c' to sources.
conv_template:> build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
building extension "numpy.random.mtrand" sources
creating build/src.macosx-10.15-x86_64-3.9/numpy/random
building data_files sources
build_src: building npy-pkg config files
running build_py
creating build/lib.macosx-10.15-x86_64-3.9
creating build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/conftest.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_globals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/dual.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_distributor_init.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/ctypeslib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/matlib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying numpy/_pytesttester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
copying build/src.macosx-10.15-x86_64-3.9/numpy/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy
creating build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/py3k.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
copying numpy/compat/_inspect.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/compat
creating build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/fromnumeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_add_newdocs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_methods.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_internal.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_string_helpers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/multiarray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/records.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup_common.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_aliased_types.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/memmap.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/overrides.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/getlimits.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_dtype_ctypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/defchararray.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/machar.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numeric.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/einsumfunc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/umath_tests.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/numerictypes.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/_type_aliases.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/cversions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/arrayprint.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
copying numpy/core/code_generators/generate_numpy_api.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/core
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/unixccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/numpy_distribution.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/conv_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/cpuinfo.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvc9compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/npy_pkg_config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/compat.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/misc_util.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/log.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/line_endings.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/lib2def.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/pathccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/system_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/exec_command.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/from_template.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/mingw32ccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/extension.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/msvccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/intelccompiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying numpy/distutils/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
copying build/src.macosx-10.15-x86_64-3.9/numpy/distutils/__config__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config_compiler.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_ext.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/config.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_headers.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_py.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_src.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/sdist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_scripts.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/bdist_rpm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/build_clib.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/autodist.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/egg_info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/develop.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
copying numpy/distutils/command/install_data.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/command
creating build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/gnu.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/compaq.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/intel.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/none.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/nag.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/ibm.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/sun.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/lahey.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/g95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/mips.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/hpux.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/environment.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/pathf95.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/absoft.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
copying numpy/distutils/fcompiler/vast.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/distutils/fcompiler
creating build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/misc.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/internals.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/creation.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/constants.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/ufuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/broadcasting.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/basics.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/subclassing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/indexing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/byteswapping.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/structured_arrays.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
copying numpy/doc/glossary.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/doc
creating build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/common_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/crackfortran.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/cb_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py2e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/func2subr.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__version__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/diagnose.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/capi_maps.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f90mod_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/f2py_testing.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/use_rules.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/auxfuncs.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
copying numpy/f2py/__main__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/f2py
creating build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/helper.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/fftpack.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
copying numpy/fft/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/fft
creating build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_iotools.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/mixins.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/nanfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/recfunctions.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/histograms.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/scimath.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/user_array.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/format.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/twodim_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/financial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/index_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/npyio.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/shape_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/stride_tricks.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arrayterator.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/function_base.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraysetops.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/arraypad.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/type_check.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/_datasource.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
copying numpy/lib/ufunclike.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/lib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/linalg.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
copying numpy/linalg/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/extras.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/version.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/testutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/core.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/bench.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/timer_comparison.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
copying numpy/ma/mrecords.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/ma
creating build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
copying numpy/matrixlib/defmatrix.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/matrixlib
creating build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/laguerre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/_polybase.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polyutils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite_e.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/chebyshev.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/polynomial.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/legendre.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
copying numpy/polynomial/hermite.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/polynomial
creating build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
copying numpy/random/info.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/random
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/setup.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/print_coercion_tables.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
copying numpy/testing/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing
creating build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/nosetester.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/__init__.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/noseclasses.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/utils.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/parameterized.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
copying numpy/testing/_private/decorators.py -> build/lib.macosx-10.15-x86_64-3.9/numpy/testing/_private
running build_clib
customize UnixCCompiler
customize UnixCCompiler using build_clib
building 'npymath' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
creating build/temp.macosx-10.15-x86_64-3.9/build
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
clang: numpy/core/src/npymath/halffloat.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
ar: adding 4 object files to build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpymath.a
building 'npysort' library
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort
compile options: '-Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/quicksort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/mergesort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/heapsort.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/selection.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npysort/binsearch.c
numpy/core/src/npysort/selection.c.src:328:9: warning: code will never be executed [-Wunreachable-code]
npy_intp k;
^~~~~~~~~~~
numpy/core/src/npysort/selection.c.src:326:14: note: silence by adding parentheses to mark code as explicitly dead
else if (0 && kth == num - 1) {
^
/* DISABLES CODE */ ( )
[... the same seven-line clang diagnostic for selection.c.src:328 is repeated verbatim 21 more times; duplicates omitted ...]
22 warnings generated.
ar: adding 5 object files to build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
ranlib:@ build/temp.macosx-10.15-x86_64-3.9/libnpysort.a
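/* A standalone reproduction of the -Wunreachable-code pattern flagged in
   selection.c.src above (hypothetical example, not numpy's actual code).
   The `0 &&` prefix makes the condition constant-false, so the branch body
   is dead; clang's note suggests wrapping the 0 in parentheses to mark the
   code as intentionally disabled. */
static int pick(long kth, long num)
{
    if (kth == 0) {
        return 1;
    }
    else if (0 && kth == num - 1) {   /* constant-false: branch never taken */
        long k;                       /* -> "code will never be executed"   */
        for (k = 0; k < num; k++) {
        }
        return 2;
    }
    return 0;
}

int main(void)
{
    return pick(1, 3);
}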
running build_ext
customize UnixCCompiler
customize UnixCCompiler using build_ext
building 'numpy.core._dummy' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/core/src/dummymodule.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/dummymodule.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_dummy.cpython-39-darwin.so
building 'numpy.core._multiarray_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/python@3.9/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.c
clang: numpy/core/src/common/mem_overlap.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/_multiarray_tests.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_tests.cpython-39-darwin.so
building 'numpy.core._multiarray_umath' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray
creating build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath
creating build/temp.macosx-10.15-x86_64-3.9/private
creating build/temp.macosx-10.15-x86_64-3.9/private/var
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils
creating build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/core/src/multiarray/alloc.c
clang: numpy/core/src/multiarray/calculation.c
clang: numpy/core/src/multiarray/array_assign_scalar.c
clang: numpy/core/src/multiarray/convert.c
clang: numpy/core/src/multiarray/ctors.c
clang: numpy/core/src/multiarray/datetime_busday.c
clang: numpy/core/src/multiarray/dragon4.c
clang: numpy/core/src/multiarray/flagsobject.c
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/ctors.c:2261:36: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (!(PyUString_Check(name) && PyUString_GET_SIZE(name) == 0)) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
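All of the `PyUnicode_GET_SIZE` / `PyUnicode_AsUnicode` / `_PyUnicode_get_wstr_length` warnings in this log trace back to the same place: numpy's compatibility header `npy_3kcompat.h` maps `PyUString_GET_SIZE` (and, further down, `PyUnicode_GET_DATA_SIZE`) onto CPython's legacy wchar_t-based unicode API, which was deprecated in Python 3.3 and is flagged loudly by the 3.9 headers. They are noisy but non-fatal. For reference, a minimal sketch of the post-PEP-393 replacements, assuming ordinary (ready) strings; this is illustrative only, not numpy's actual patch:

```c
/* Minimal sketch of the non-deprecated unicode accessors; illustrative
 * only, not numpy's actual fix. */
#include <Python.h>

static Py_ssize_t modern_length(PyObject *name)
{
    /* PyUnicode_GET_LENGTH counts code points and replaces the
     * deprecated PyUnicode_GET_SIZE reached via PyUString_GET_SIZE. */
    return PyUnicode_GET_LENGTH(name);
}

static Py_UCS4 modern_first_char(PyObject *name)
{
    /* PyUnicode_KIND/PyUnicode_DATA/PyUnicode_READ replace the
     * deprecated PyUnicode_AS_UNICODE / PyUnicode_AsUnicode pair. */
    if (PyUnicode_GET_LENGTH(name) == 0) {
        return 0;
    }
    return PyUnicode_READ(PyUnicode_KIND(name), PyUnicode_DATA(name), 0);
}
```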
clang: numpy/core/src/multiarray/arrayobject.c
clang: numpy/core/src/multiarray/array_assign_array.c
clang: numpy/core/src/multiarray/convert_datatype.c
clang: numpy/core/src/multiarray/getset.c
clang: numpy/core/src/multiarray/datetime_busdaycal.c
clang: numpy/core/src/multiarray/buffer.c
clang: numpy/core/src/multiarray/compiled_base.c
clang: numpy/core/src/multiarray/hashdescr.c
clang: numpy/core/src/multiarray/descriptor.c
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:453:13: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
if (PyUString_GET_SIZE(name) == 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/descriptor.c:460:48: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
else if (PyUString_Check(title) && PyUString_GET_SIZE(title) > 0) {
^
numpy/core/include/numpy/npy_3kcompat.h:110:28: note: expanded from macro 'PyUString_GET_SIZE'
#define PyUString_GET_SIZE PyUnicode_GET_SIZE
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang: numpy/core/src/multiarray/conversion_utils.c
clang: numpy/core/src/multiarray/item_selection.c
clang: numpy/core/src/multiarray/dtype_transfer.c
clang: numpy/core/src/multiarray/mapping.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.c
3 warnings generated.
clang: numpy/core/src/multiarray/datetime.c
numpy/core/src/multiarray/arraytypes.c.src:477:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ptr = PyUnicode_AS_UNICODE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/arraytypes.c.src:482:15: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
datalen = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
clang: numpy/core/src/multiarray/common.c
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:187:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:239:28: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
itemsize = PyUnicode_GET_DATA_SIZE(temp);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/common.c:282:24: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
int itemsize = PyUnicode_GET_DATA_SIZE(obj);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
6 warnings generated.
clang: numpy/core/src/multiarray/nditer_pywrap.c
9 warnings generated.
clang: numpy/core/src/multiarray/sequence.c
clang: numpy/core/src/multiarray/shape.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.c
clang: numpy/core/src/multiarray/methods.c
clang: numpy/core/src/multiarray/iterators.c
clang: numpy/core/src/multiarray/datetime_strings.c
clang: numpy/core/src/multiarray/number.c
clang: numpy/core/src/multiarray/scalarapi.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.c
numpy/core/src/multiarray/scalarapi.c:74:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:135:28: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
return (void *)PyUnicode_AS_DATA(scalar);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalarapi.c:568:29: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
descr->elsize = PyUnicode_GET_DATA_SIZE(sc);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ip = dptr = PyUnicode_AS_UNICODE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
new = PyUnicode_FromUnicode(ip, len);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:475:17: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
ip = dptr = PyUnicode_AS_UNICODE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:476:11: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
len = PyUnicode_GET_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:481:11: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
new = PyUnicode_FromUnicode(ip, len);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1849:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buffer = PyUnicode_AS_DATA(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:283:21: note: expanded from macro 'PyUnicode_AS_DATA'
((const char *)(PyUnicode_AS_UNICODE(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:279:7: note: expanded from macro 'PyUnicode_AS_UNICODE'
PyUnicode_AsUnicode(_PyObject_CAST(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/scalartypes.c.src:1850:18: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
buflen = PyUnicode_GET_DATA_SIZE(self);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:268:6: note: expanded from macro 'PyUnicode_GET_DATA_SIZE'
(PyUnicode_GET_SIZE(op) * Py_UNICODE_SIZE)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
5 warnings generated.
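The `scalartypes.c.src` chunk above also hits the deprecated constructor `PyUnicode_FromUnicode`. Since `Py_UNICODE` is a typedef for `wchar_t`, the drop-in replacement is `PyUnicode_FromWideChar`; a hedged sketch, not necessarily the fix numpy shipped:

```c
/* Sketch of replacing the deprecated call seen above
 * (new = PyUnicode_FromUnicode(ip, len)); illustrative only. */
#include <Python.h>

static PyObject *copy_prefix(const wchar_t *ip, Py_ssize_t len)
{
    /* PyUnicode_FromWideChar accepts the same (buffer, length) pair. */
    return PyUnicode_FromWideChar(ip, len);
}
```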
clang: numpy/core/src/multiarray/typeinfo.c
clang: numpy/core/src/multiarray/refcount.c
clang: numpy/core/src/multiarray/usertypes.c
clang: numpy/core/src/multiarray/multiarraymodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.c
clang: numpy/core/src/multiarray/vdot.c
clang: numpy/core/src/umath/umathmodule.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.c
clang: numpy/core/src/umath/reduction.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.c
clang: numpy/core/src/multiarray/nditer_api.c
14 warnings generated.
clang: numpy/core/src/multiarray/strfuncs.c
numpy/core/src/umath/loops.c.src:655:18: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
result = PyEval_CallObject(tocall, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:178:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_ReprFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/core/src/multiarray/strfuncs.c:195:13: warning: 'PyEval_CallObjectWithKeywords' is deprecated [-Wdeprecated-declarations]
s = PyEval_CallObject(PyArray_StrFunction, arglist);
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:24:5: note: expanded from macro 'PyEval_CallObject'
PyEval_CallObjectWithKeywords(callable, arg, (PyObject *)NULL)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/ceval.h:17:1: note: 'PyEval_CallObjectWithKeywords' has been explicitly marked deprecated here
Py_DEPRECATED(3.9) PyAPI_FUNC(PyObject *) PyEval_CallObjectWithKeywords(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
2 warnings generated.
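Unlike the unicode warnings, the `PyEval_CallObjectWithKeywords` deprecation is new in Python 3.9 itself: the `PyEval_Call*` family was deprecated in favor of `PyObject_Call*`. A minimal sketch of the modern spelling of the flagged calls, again illustrative rather than numpy's actual change:

```c
/* Sketch: PyObject_CallObject is the non-deprecated equivalent of
 * PyEval_CallObject(tocall, arglist); illustrative only. */
#include <Python.h>

static PyObject *call_with_args(PyObject *tocall, PyObject *arglist)
{
    /* arglist must be a tuple of positional arguments, or NULL. */
    return PyObject_CallObject(tocall, arglist);
}
```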
clang: numpy/core/src/multiarray/temp_elide.c
clang: numpy/core/src/umath/cpuid.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.c
clang: numpy/core/src/umath/ufunc_object.c
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'byte_long' [-Wunused-function]
byte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ubyte_long' [-Wunused-function]
ubyte_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'short_long' [-Wunused-function]
short_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ushort_long' [-Wunused-function]
ushort_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'int_long' [-Wunused-function]
int_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'uint_long' [-Wunused-function]
uint_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'long_long' [-Wunused-function]
long_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulong_long' [-Wunused-function]
ulong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longlong_long' [-Wunused-function]
longlong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'ulonglong_long' [-Wunused-function]
ulonglong_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'half_long' [-Wunused-function]
half_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'float_long' [-Wunused-function]
float_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'double_long' [-Wunused-function]
double_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'longdouble_long' [-Wunused-function]
longdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cfloat_long' [-Wunused-function]
cfloat_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'cdouble_long' [-Wunused-function]
cdouble_long(PyObject *obj)
^
numpy/core/src/umath/scalarmath.c.src:1449:1: warning: unused function 'clongdouble_long' [-Wunused-function]
clongdouble_long(PyObject *obj)
^
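The long run of `unused function '<type>_long'` warnings is an artifact of numpy's `.c.src` templating: one helper is stamped out per scalar type whether or not the preprocessor path compiling this file ends up calling it. Harmless; the usual way to quiet such generated helpers is an unused attribute. A hedged sketch with a hypothetical `MAYBE_UNUSED` macro, not how numpy necessarily resolved it:

```c
/* Sketch of silencing -Wunused-function for template-generated static
 * helpers; MAYBE_UNUSED is a hypothetical name for illustration. */
#if defined(__GNUC__) || defined(__clang__)
#define MAYBE_UNUSED __attribute__((unused))
#else
#define MAYBE_UNUSED
#endif

static MAYBE_UNUSED long byte_long_example(signed char value)
{
    return (long)value;  /* stand-in for the generated byte_long helper */
}
```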
clang: numpy/core/src/multiarray/nditer_constr.c
numpy/core/src/umath/ufunc_object.c:657:19: warning: comparison of integers of different signs: 'int' and 'size_t' (aka 'unsigned long') [-Wsign-compare]
for (i = 0; i < len; i++) {
~ ^ ~~~
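This one is the classic signed/unsigned loop mismatch: `i` is declared `int` while `len` is a `size_t`. The generic fix is to give the index the bound's type; a sketch, not the exact patch:

```c
/* Sketch of the usual -Wsign-compare fix for the flagged loop. */
#include <stddef.h>

static void iterate(size_t len)
{
    for (size_t i = 0; i < len; i++) {  /* was: int i compared to size_t len */
        /* ... loop body ... */
    }
}
```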
clang: numpy/core/src/umath/override.c
clang: numpy/core/src/npymath/npy_math.c
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.c
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
numpy/core/src/umath/loops.c.src:2527:22: warning: code will never be executed [-Wunreachable-code]
npy_intp n = dimensions[0];
^~~~~~~~~~
numpy/core/src/umath/loops.c.src:2526:29: note: silence by adding parentheses to mark code as explicitly dead
if (IS_BINARY_REDUCE && 0) {
^
/* DISABLES CODE */ ( )
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.c
numpy/core/src/npymath/npy_math_complex.c.src:48:33: warning: unused variable 'tiny' [-Wunused-const-variable]
static const volatile npy_float tiny = 3.9443045e-31f;
^
numpy/core/src/npymath/npy_math_complex.c.src:67:25: warning: unused variable 'c_halff' [-Wunused-const-variable]
static const npy_cfloat c_halff = {0.5F, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:25: warning: unused variable 'c_if' [-Wunused-const-variable]
static const npy_cfloat c_if = {0.0, 1.0F};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:25: warning: unused variable 'c_ihalff' [-Wunused-const-variable]
static const npy_cfloat c_ihalff = {0.0, 0.5F};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddf' [-Wunused-function]
caddf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubf' [-Wunused-function]
csubf(npy_cfloat a, npy_cfloat b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegf' [-Wunused-function]
cnegf(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulif' [-Wunused-function]
cmulif(npy_cfloat a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:26: warning: unused variable 'c_half' [-Wunused-const-variable]
static const npy_cdouble c_half = {0.5, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:26: warning: unused variable 'c_i' [-Wunused-const-variable]
static const npy_cdouble c_i = {0.0, 1.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:26: warning: unused variable 'c_ihalf' [-Wunused-const-variable]
static const npy_cdouble c_ihalf = {0.0, 0.5};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'cadd' [-Wunused-function]
cadd(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csub' [-Wunused-function]
csub(npy_cdouble a, npy_cdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cneg' [-Wunused-function]
cneg(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmuli' [-Wunused-function]
cmuli(npy_cdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:67:30: warning: unused variable 'c_halfl' [-Wunused-const-variable]
static const npy_clongdouble c_halfl = {0.5L, 0.0};
^
numpy/core/src/npymath/npy_math_complex.c.src:68:30: warning: unused variable 'c_il' [-Wunused-const-variable]
static const npy_clongdouble c_il = {0.0, 1.0L};
^
numpy/core/src/npymath/npy_math_complex.c.src:69:30: warning: unused variable 'c_ihalfl' [-Wunused-const-variable]
static const npy_clongdouble c_ihalfl = {0.0, 0.5L};
^
numpy/core/src/npymath/npy_math_complex.c.src:79:1: warning: unused function 'caddl' [-Wunused-function]
caddl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:87:1: warning: unused function 'csubl' [-Wunused-function]
csubl(npy_clongdouble a, npy_clongdouble b)
^
numpy/core/src/npymath/npy_math_complex.c.src:137:1: warning: unused function 'cnegl' [-Wunused-function]
cnegl(npy_clongdouble a)
^
numpy/core/src/npymath/npy_math_complex.c.src:144:1: warning: unused function 'cmulil' [-Wunused-function]
cmulil(npy_clongdouble a)
^
22 warnings generated.
clang: numpy/core/src/common/mem_overlap.c
clang: numpy/core/src/npymath/halffloat.c
clang: numpy/core/src/common/array_assign.c
clang: numpy/core/src/common/ufunc_override.c
clang: numpy/core/src/common/npy_longdouble.c
clang: numpy/core/src/common/numpyos.c
clang: numpy/core/src/common/ucsnarrow.c
1 warning generated.
clang: numpy/core/src/umath/extobj.c
numpy/core/src/common/ucsnarrow.c:139:34: warning: 'PyUnicode_FromUnicode' is deprecated [-Wdeprecated-declarations]
ret = (PyUnicodeObject *)PyUnicode_FromUnicode((Py_UNICODE*)buf,
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:551:1: note: 'PyUnicode_FromUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(PyObject*) PyUnicode_FromUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
1 warning generated.
clang: numpy/core/src/common/python_xerbla.c
clang: numpy/core/src/common/cblasfuncs.c
clang: /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c
In file included from /private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.c:26:
In file included from numpy/core/include/numpy/arrayobject.h:4:
In file included from numpy/core/include/numpy/ndarrayobject.h:21:
build/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy/__multiarray_api.h:1463:1: warning: unused function '_import_array' [-Wunused-function]
_import_array(void)
^
1 warning generated.
17 warnings generated.
clang: numpy/core/src/umath/ufunc_type_resolution.c
4 warnings generated.
4 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/alloc.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arrayobject.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/arraytypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_scalar.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/array_assign_array.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/buffer.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/calculation.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/compiled_base.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/common.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/convert_datatype.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/conversion_utils.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/ctors.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_strings.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busday.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/datetime_busdaycal.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/descriptor.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dragon4.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/dtype_transfer.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/einsum.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/flagsobject.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/getset.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/hashdescr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/item_selection.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/iterators.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/lowlevel_strided_loops.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/mapping.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/methods.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/multiarraymodule.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_templ.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_api.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_constr.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/nditer_pywrap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/number.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/refcount.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/sequence.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/shape.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalarapi.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/scalartypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/strfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/temp_elide.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/typeinfo.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/usertypes.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/multiarray/vdot.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/umathmodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/reduction.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/loops.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/matmul.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_object.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/extobj.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/cpuid.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/scalarmath.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/ufunc_type_resolution.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/umath/override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/ieee754.o build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/npy_math_complex.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/npymath/halffloat.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/array_assign.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/mem_overlap.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/npy_longdouble.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ucsnarrow.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/ufunc_override.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/numpyos.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/cblasfuncs.o build/temp.macosx-10.15-x86_64-3.9/numpy/core/src/common/python_xerbla.o build/temp.macosx-10.15-x86_64-3.9/private/var/folders/fz/0j719tys48x7jlnjnwc69smr0000gn/T/pip-install-ufzck51l/numpy_b0e8a3953a1d4b46801f12bcea55536e/numpy/_build_utils/src/apple_sgemv_fix.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -lnpysort -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_multiarray_umath.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.core._umath_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_umath_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_umath_tests.cpython-39-darwin.so
building 'numpy.core._rational_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_rational_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_rational_tests.cpython-39-darwin.so
building 'numpy.core._struct_ufunc_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_struct_ufunc_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_struct_ufunc_tests.cpython-39-darwin.so
building 'numpy.core._operand_flag_tests' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
compile options: '-DNPY_INTERNAL_BUILD=1 -DHAVE_NPY_CONFIG_H=1 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/core/src/umath/_operand_flag_tests.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/core/_operand_flag_tests.cpython-39-darwin.so
building 'numpy.fft.fftpack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/fft
compile options: '-Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/fft/fftpack_litemodule.c
clang: numpy/fft/fftpack.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/fft/fftpack.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/fft/fftpack_lite.cpython-39-darwin.so
building 'numpy.linalg.lapack_lite' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg
creating build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: numpy/linalg/lapack_litemodule.c
clang: numpy/linalg/lapack_lite/python_xerbla.c
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_litemodule.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.linalg._umath_linalg' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg
compile options: '-DNO_ATLAS_INFO=3 -DHAVE_CBLAS -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
extra options: '-msse3 -I/System/Library/Frameworks/vecLib.framework/Headers'
clang: build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.c
numpy/linalg/umath_linalg.c.src:735:32: warning: unknown warning group '-Wmaybe-uninitialized', ignored [-Wunknown-warning-option]
#pragma GCC diagnostic ignored "-Wmaybe-uninitialized"
^
numpy/linalg/umath_linalg.c.src:541:1: warning: unused function 'dump_ufunc_object' [-Wunused-function]
dump_ufunc_object(PyUFuncObject* ufunc)
^
numpy/linalg/umath_linalg.c.src:566:1: warning: unused function 'dump_linearize_data' [-Wunused-function]
dump_linearize_data(const char* name, const LINEARIZE_DATA_t* params)
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_FLOAT_matrix' [-Wunused-function]
dump_FLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_DOUBLE_matrix' [-Wunused-function]
dump_DOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CFLOAT_matrix' [-Wunused-function]
dump_CFLOAT_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:602:1: warning: unused function 'dump_CDOUBLE_matrix' [-Wunused-function]
dump_CDOUBLE_matrix(const char* name,
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_FLOAT_matrix' [-Wunused-function]
zero_FLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_DOUBLE_matrix' [-Wunused-function]
zero_DOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CFLOAT_matrix' [-Wunused-function]
zero_CFLOAT_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:865:1: warning: unused function 'zero_CDOUBLE_matrix' [-Wunused-function]
zero_CDOUBLE_matrix(void *dst_in, const LINEARIZE_DATA_t* data)
^
numpy/linalg/umath_linalg.c.src:1862:1: warning: unused function 'dump_geev_params' [-Wunused-function]
dump_geev_params(const char *name, GEEV_PARAMS_t* params)
^
numpy/linalg/umath_linalg.c.src:2132:1: warning: unused function 'init_cgeev' [-Wunused-function]
init_cgeev(GEEV_PARAMS_t* params,
^
numpy/linalg/umath_linalg.c.src:2213:1: warning: unused function 'process_cgeev_results' [-Wunused-function]
process_cgeev_results(GEEV_PARAMS_t *NPY_UNUSED(params))
^
numpy/linalg/umath_linalg.c.src:2376:1: warning: unused function 'dump_gesdd_params' [-Wunused-function]
dump_gesdd_params(const char *name,
^
numpy/linalg/umath_linalg.c.src:2864:1: warning: unused function 'dump_gelsd_params' [-Wunused-function]
dump_gelsd_params(const char *name,
^
16 warnings generated.
clang -bundle -undefined dynamic_lookup -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk build/temp.macosx-10.15-x86_64-3.9/build/src.macosx-10.15-x86_64-3.9/numpy/linalg/umath_linalg.o build/temp.macosx-10.15-x86_64-3.9/numpy/linalg/lapack_lite/python_xerbla.o -L/usr/local/lib -L/usr/local/opt/[email protected]/lib -L/usr/local/opt/sqlite/lib -Lbuild/temp.macosx-10.15-x86_64-3.9 -lnpymath -o build/lib.macosx-10.15-x86_64-3.9/numpy/linalg/_umath_linalg.cpython-39-darwin.so -Wl,-framework -Wl,Accelerate
building 'numpy.random.mtrand' extension
compiling C sources
C compiler: clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random
creating build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand
compile options: '-D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c'
clang: numpy/random/mtrand/mtrand.c
clang: numpy/random/mtrand/initarray.c
clang: numpy/random/mtrand/randomkit.c
clang: numpy/random/mtrand/distributions.c
numpy/random/mtrand/mtrand.c:40400:34: error: no member named 'tp_print' in 'struct _typeobject'
__pyx_type_6mtrand_RandomState.tp_print = 0;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:22: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42673:52: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**name) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:26: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:261:7: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op) : \
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: 'PyUnicode_AsUnicode' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:262:14: note: expanded from macro 'PyUnicode_GET_SIZE'
((void)PyUnicode_AsUnicode(_PyObject_CAST(op)),\
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:580:1: note: 'PyUnicode_AsUnicode' has been explicitly marked deprecated here
Py_DEPRECATED(3.3) PyAPI_FUNC(Py_UNICODE *) PyUnicode_AsUnicode(
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
numpy/random/mtrand/mtrand.c:42689:59: warning: '_PyUnicode_get_wstr_length' is deprecated [-Wdeprecated-declarations]
(PyUnicode_GET_SIZE(**argname) != PyUnicode_GET_SIZE(key)) ? 1 :
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:264:8: note: expanded from macro 'PyUnicode_GET_SIZE'
PyUnicode_WSTR_LENGTH(op)))
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:451:35: note: expanded from macro 'PyUnicode_WSTR_LENGTH'
#define PyUnicode_WSTR_LENGTH(op) _PyUnicode_get_wstr_length((PyObject*)op)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/cpython/unicodeobject.h:445:1: note: '_PyUnicode_get_wstr_length' has been explicitly marked deprecated here
Py_DEPRECATED(3.3)
^
/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9/pyport.h:508:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
12 warnings and 1 error generated.
error: Command "clang -Wno-unused-result -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -fwrapv -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/usr/include -I/Library/Developer/CommandLineTools/SDKs/MacOSX10.15.sdk/System/Library/Frameworks/Tk.framework/Versions/8.5/Headers -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -Inumpy/core/include -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/include/numpy -Inumpy/core/src/common -Inumpy/core/src -Inumpy/core -Inumpy/core/src/npymath -Inumpy/core/src/multiarray -Inumpy/core/src/umath -Inumpy/core/src/npysort -I/usr/local/include -I/usr/local/opt/[email protected]/include -I/usr/local/opt/sqlite/include -I/Users/destiny/Downloads/env/include -I/usr/local/Cellar/[email protected]/3.9.0_1/Frameworks/Python.framework/Versions/3.9/include/python3.9 -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/common -Ibuild/src.macosx-10.15-x86_64-3.9/numpy/core/src/npymath -c numpy/random/mtrand/mtrand.c -o build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o -MMD -MF build/temp.macosx-10.15-x86_64-3.9/numpy/random/mtrand/mtrand.o.d" failed with exit status 1 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1696/timeline | null | completed | null | null | false | [
"Maybe try to create a virtual env with python 3.8 or 3.7",
"Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ",
"Damn sorry",
"Damn sorry"
] |
https://api.github.com/repos/huggingface/datasets/issues/4550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4550/comments | https://api.github.com/repos/huggingface/datasets/issues/4550/events | https://github.com/huggingface/datasets/issues/4550 | 1,282,374,441 | I_kwDODunzps5Mb3sp | 4,550 | imdb source error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-06-23T13:02:52Z | 2022-06-23T13:47:05Z | 2022-06-23T13:47:04Z | null | ## Describe the bug
The `imdb` dataset fails to load: requests to the source data URL time out.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imdb")
```
## Expected results
The dataset downloads and loads without errors.
## Actual results
```bash
06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source
06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0]
.....
ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))")))
```
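As a quick sanity check, one can probe the source URL directly to confirm the failure is on the upstream host (an illustrative sketch, not part of the original report; it assumes `requests` is installed):
```python
import requests

# HEAD the upstream archive; in this incident the Stanford host itself
# was unreachable, so the request times out instead of returning 200.
url = "http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
try:
    requests.head(url, timeout=10).raise_for_status()
except requests.RequestException as err:
    print(f"Source unreachable: {err}")
```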
## Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4550/timeline | null | completed | null | null | false | [
"Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/2587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2587/comments | https://api.github.com/repos/huggingface/datasets/issues/2587/events | https://github.com/huggingface/datasets/pull/2587 | 936,771,339 | MDExOlB1bGxSZXF1ZXN0NjgzNDI5NjQy | 2,587 | Add aiohttp to tests extras require | [] | closed | false | null | 0 | 2021-07-05T07:14:01Z | 2021-07-05T09:04:38Z | 2021-07-05T09:04:38Z | null | Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp and this is missing from our `tests` extras_require dependencies.
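For context, a minimal sketch of what the change amounts to (the dependency names and layout here are illustrative, not the repository's actual `setup.py`):
```python
from setuptools import setup

# "aiohttp" has to be listed in the "tests" extra so that
# `pip install datasets[tests]` pulls it in and the streaming tests run in CI.
TESTS_REQUIRE = [
    "pytest",
    "aiohttp",
]

setup(
    name="datasets",
    extras_require={"tests": TESTS_REQUIRE},
)
```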
Our CI test suite should be exhaustive and test all the library functionalities. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2587",
"merged_at": "2021-07-05T09:04:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2587"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2998 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2998/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2998/comments | https://api.github.com/repos/huggingface/datasets/issues/2998/events | https://github.com/huggingface/datasets/issues/2998 | 1,013,372,871 | I_kwDODunzps48ZtfH | 2,998 | cannot shuffle dataset loaded from disk | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2021-10-01T13:49:52Z | 2021-10-01T13:49:52Z | null | null | ## Describe the bug
A dataset loaded from disk (here, stored on S3) cannot be shuffled.
## Steps to reproduce the bug
```python
import s3fs
from datasets import load_from_disk

s3 = s3fs.S3FileSystem()  # assumption: `s3` in the original snippet is an s3fs filesystem
my_dataset = load_from_disk('s3://my_file/validate', fs=s3)
sample = my_dataset.select(range(100)).shuffle(seed=1234)
```
## Actual results
```
sample = my_dataset .select(range(100)).shuffle(seed=1234)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2494, in shuffle
new_fingerprint=new_fingerprint,
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2303, in select
tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 547, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 258, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnnu5uhnx/my_file/validate/tmpy76d70g4'
```
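A possible workaround (an untested sketch; the `keep_in_memory` parameter exists on both methods in the `datasets` 1.12 API) is to keep the derived indices in memory, so that no temporary indices file needs to be created next to the remote `s3://` path:
```python
# Continues the snippet above: `my_dataset` was loaded with load_from_disk(..., fs=s3).
sample = my_dataset.select(range(100), keep_in_memory=True).shuffle(
    seed=1234, keep_in_memory=True
)
```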
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Python version: 3.7
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2998/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2998/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2866/comments | https://api.github.com/repos/huggingface/datasets/issues/2866/events | https://github.com/huggingface/datasets/issues/2866 | 986,706,676 | MDU6SXNzdWU5ODY3MDY2NzY= | 2,866 | "counter" dataset raises an error in normal mode, but not in streaming mode | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 11 | 2021-09-02T13:10:53Z | 2021-10-14T09:24:09Z | 2021-10-14T09:24:09Z | null | ## Describe the bug
The `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode.
## Steps to reproduce the bug
```python
>>> import datasets as ds
>>> a = ds.load_dataset('counter', split="train", streaming=False)
Using custom data configuration default
Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split
for key, record in utils.tqdm(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__
for obj in iterable:
File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples
with derived_file.open(encoding="utf-8") as f:
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open
return io.open(self, mode, buffering, encoding, errors, newline,
File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener
return self._accessor.open(self, flags, mode)
FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
self._download_and_prepare(
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare
raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'
```
```python
>>> import datasets as ds
>>> b = ds.load_dataset('counter', split="train", streaming=True)
Using custom data configuration default
>>> list(b)
[]
```
## Expected results
An exception should be raised in streaming mode
## Actual results
No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty.
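To make the failure mode concrete (an illustrative sketch, not from the original report): the only signal a caller gets today is an empty iterator, which has to be detected by hand and is indistinguishable from a legitimately empty dataset:
```python
import datasets as ds

stream = ds.load_dataset("counter", split="train", streaming=True)
first = next(iter(stream), None)
if first is None:
    # Could be a broken loader or a genuinely empty dataset; no way to tell.
    raise RuntimeError("Stream yielded no examples")
```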
## Environment info
- `datasets` version: 1.11.1.dev0
- Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2866/timeline | null | completed | null | null | false | [
"Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.",
"OK. Do you think it's possible to detect this, and raise an exception (maybe `NotImplementedError`, or a specific `StreamingError`)?",
"We should definitely support datasets using `pathlib` in streaming mode...\r\n\r\nFor non-supported datasets in streaming mode, we have already a request of raising an error/warning: see #2654.",
"Hi @severo, please note that \"counter\" dataset will be streamable (at least until it arrives at the missing file, error already in normal mode) once these PRs are merged:\r\n- #2874\r\n- #2876\r\n- #2880\r\n\r\nI have tested it. 😉 ",
"Now (on master), we get:\r\n\r\n```\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```\r\n\r\n```\r\nUsing custom data configuration default\r\nDownloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9...\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 726, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1124, in _prepare_split\r\n for key, record in utils.tqdm(\r\n File \"/home/slesage/hf/datasets/.venv/lib/python3.8/site-packages/tqdm/std.py\", line 1185, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py\", line 161, in _generate_examples\r\n with derived_file.open(encoding=\"utf-8\") as f:\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1222, in open\r\n return io.open(self, mode, buffering, encoding, errors, newline,\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py\", line 1078, in _opener\r\n return self._accessor.open(self, flags, mode)\r\nFileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 728, in _download_and_prepare\r\n raise OSError(\r\nOSError: Cannot find data file.\r\nOriginal error:\r\n[Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml'\r\n```\r\n\r\nThe error is now the same with or without streaming. I close the issue, thanks @albertvillanova and @lhoestq!\r\n",
"Note that we might want to open an issue to fix the \"counter\" dataset by itself, but I let it up to you.",
"Fixed here: https://github.com/huggingface/datasets/pull/2894. Thanks @albertvillanova ",
"On master, I get:\r\n\r\n```python\r\n>>> import datasets as ds\r\n>>> iterable_dataset = ds.load_dataset('counter', split=\"train\", streaming=True)\r\n>>> rows = list(iterable_dataset.take(100))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 273, in __iter__\r\n yield from islice(self.ex_iterable, self.n)\r\n File \"/home/slesage/hf/datasets/src/datasets/iterable_dataset.py\", line 78, in __iter__\r\n for key, example in self.generate_examples_fn(**self.kwargs):\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/b9e4378dbd3f5ce235d2302e48168c00196e67bbcd13cc7e1f6e69ef82c0cf2a/counter.py\", line 153, in _generate_examples\r\n files = sorted(base_path.glob(r\"[0-9][0-9][0-9][0-9].xml\"))\r\nTypeError: xpathglob() missing 1 required positional argument: 'pattern'\r\n```",
"Associated to the above exception, if I create a test and run it with pytest, I get an awful traceback.\r\n\r\n- create a file `test_counter.py`\r\n\r\n```python\r\nimport pytest\r\nfrom datasets import load_dataset, IterableDataset\r\nfrom typing import Any, cast\r\n\r\n\r\ndef test_counter() -> Any:\r\n iterable_dataset = cast(IterableDataset, load_dataset(\"counter\", split=\"train\", streaming=True))\r\n with pytest.raises(TypeError):\r\n list(iterable_dataset.take(100))\r\n```\r\n\r\n- run the test with pytest\r\n\r\n```bash\r\n$ python -m pytest -x test_counter.py\r\n============================================================================================================================= test session starts ==============================================================================================================================\r\nplatform linux -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0\r\nrootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml\r\nplugins: anyio-3.3.2, cov-2.12.1\r\ncollected 1 item\r\n\r\ntests/test_counter.py . [100%]Traceback (most recent call last):\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 197, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pytest/__main__.py\", line 5, in <module>\r\n raise SystemExit(pytest.console_main())\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 185, in console_main\r\n code = main()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/config/__init__.py\", line 162, in main\r\n ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 60, in _multicall\r\n return outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 316, in pytest_cmdline_main\r\n return wrap_session(config, _main)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/main.py\", line 304, in wrap_session\r\n config.hook.pytest_sessionfinish(\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_hooks.py\", line 265, in __call__\r\n return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_manager.py\", line 80, in _hookexec\r\n 
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 55, in _multicall\r\n gen.send(outcome)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/terminal.py\", line 803, in pytest_sessionfinish\r\n outcome.get_result()\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_result.py\", line 60, in get_result\r\n raise ex[1].with_traceback(ex[2])\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/pluggy/_callers.py\", line 39, in _multicall\r\n res = hook_impl.function(*args)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 428, in pytest_sessionfinish\r\n config.cache.set(\"cache/nodeids\", sorted(self.cached_nodeids))\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/_pytest/cacheprovider.py\", line 188, in set\r\n f = path.open(\"w\")\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 199, in xpathopen\r\n return xopen(_as_posix(path), *args, **kwargs)\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py\", line 117, in _as_posix\r\n path_as_posix = path.as_posix()\r\nAttributeError: 'str' object has no attribute 'as_posix'\r\n```\r\n",
"I opened a PR to fix these issues.\r\nAlso in your test you expect a TypeError but I don't know why. On my side it works fine without raising a TypeError",
"I had the issue (TypeError raised) on my branch, but it's fixed now. Thanks"
] |
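The root cause named in the first comment — `pathlib` joins versus `os.path.join` — can be shown with a hedged sketch (the URL below is a placeholder in the chained `zip://...::https://...` style that streaming mode uses; it is not a real dataset URL):

```python
# Hedged illustration: plain string joins keep the chained URL scheme intact,
# but PurePath's "/" operator normalizes "//" to "/" and breaks it -- which is
# why script-side pathlib joins needed dedicated support (#2874, #2876, #2880).
import os
from pathlib import PurePosixPath

base = "zip://::https://example.com/COUNTER.zip"  # placeholder streaming-style URL

print(os.path.join(base, "COUNTER", "0001.xml"))
# zip://::https://example.com/COUNTER.zip/COUNTER/0001.xml

print(str(PurePosixPath(base) / "COUNTER" / "0001.xml"))
# zip:/::https:/example.com/COUNTER.zip/COUNTER/0001.xml  -- scheme mangled
```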
https://api.github.com/repos/huggingface/datasets/issues/3642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3642/comments | https://api.github.com/repos/huggingface/datasets/issues/3642/events | https://github.com/huggingface/datasets/pull/3642 | 1,116,306,986 | PR_kwDODunzps4xrj2S | 3,642 | Fix dataset slicing with negative bounds when indices mapping is not `None` | [] | closed | false | null | 0 | 2022-01-27T14:45:53Z | 2022-01-27T18:16:23Z | 2022-01-27T18:16:22Z | null | Fix #3611 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3642/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3642",
"merged_at": "2022-01-27T18:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3642"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6035/comments | https://api.github.com/repos/huggingface/datasets/issues/6035/events | https://github.com/huggingface/datasets/pull/6035 | 1,805,087,687 | PR_kwDODunzps5Vh_QR | 6,035 | Dataset representation | [] | open | false | null | 1 | 2023-07-14T15:42:37Z | 2023-07-19T19:41:35Z | null | null | __repr__ and _repr_html_ now both are similar to that of Polars | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6035/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6035",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6035"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6035). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/1781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1781/comments | https://api.github.com/repos/huggingface/datasets/issues/1781/events | https://github.com/huggingface/datasets/issues/1781 | 793,914,556 | MDU6SXNzdWU3OTM5MTQ1NTY= | 1,781 | AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | [] | closed | false | null | 8 | 2021-01-26T04:18:35Z | 2022-10-05T12:37:06Z | 2022-10-05T12:37:06Z | null | I'm using Colab. And suddenly this morning, there is this error. Have a look below!

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1781/timeline | null | completed | null | null | false | [
"Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```",
"We should bump up the version test of pyarrow maybe no?\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes indeed.\r\n\r\nAlso it looks like Pyarrow 3.0.0 got released on pypi 10 hours ago. This might be related to the bug, I'll investigate\r\nEDIT: looks like the 3.0.0 release doesn't have unexpected breaking changes for us, so I don't think the issue comes from that",
"Maybe colab moved to pyarrow 0.16 by default (instead of 0.14 before)?",
"Installing datasets installs pyarrow>=0.17.1 so in theory it doesn't matter which version of pyarrow colab has by default (which is currently pyarrow 0.14.1).\r\n\r\nAlso now the colab runtime refresh the pyarrow version automatically after the update from pip (previously you needed to restart your runtime).\r\n\r\nI guess what happened is that Colab didn't refresh pyarrow for some reason, and the AttributeError was raised *before* the pyarrow version check from `datasets` at https://github.com/huggingface/datasets/blob/master/src/datasets/__init__.py#L60",
"Yes colab doesn’t reload preloaded library unless you restart the instance. Maybe we should move the check on top of the init ",
"Yes I'll do that :)",
"I updated the pyarrow version check in #1782"
] |
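The version check mentioned in the last comments was moved to the top of `datasets/__init__.py`; below is a hedged, simplified sketch of that kind of import-time guard — the real message and comparison in the library may differ:

```python
import pyarrow
from packaging import version

# Fail fast at import time, before any pyarrow attribute is touched.
if version.parse(pyarrow.__version__) < version.parse("0.17.1"):
    raise ImportWarning(
        "To use `datasets`, `pyarrow>=0.17.1` is required, but pyarrow=="
        f"{pyarrow.__version__} was found. In Colab, run "
        "`pip install pyarrow --upgrade` and restart the runtime."
    )
```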
https://api.github.com/repos/huggingface/datasets/issues/398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/398/comments | https://api.github.com/repos/huggingface/datasets/issues/398/events | https://github.com/huggingface/datasets/pull/398 | 657,511,962 | MDExOlB1bGxSZXF1ZXN0NDQ5NjE1OTk1 | 398 | Add inline links | [] | closed | false | null | 2 | 2020-07-15T17:04:04Z | 2020-07-22T10:14:22Z | 2020-07-22T10:14:22Z | null | Add inline links to `Contributing.md` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/398/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"merged_at": "2020-07-22T10:14:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398"
} | true | [
"Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?",
"Sure, I will do that too"
] |
https://api.github.com/repos/huggingface/datasets/issues/1946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1946/comments | https://api.github.com/repos/huggingface/datasets/issues/1946/events | https://github.com/huggingface/datasets/pull/1946 | 816,526,294 | MDExOlB1bGxSZXF1ZXN0NTgwMTcyNzI2 | 1,946 | Implement Dataset from CSV | [] | closed | false | null | 3 | 2021-02-25T15:10:13Z | 2021-03-12T09:42:48Z | 2021-03-12T09:42:48Z | null | Implement `Dataset.from_csv`.
Analogous to #1943.
If, in the end, the scripts should be used instead, at least we can reuse the tests here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1946/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1946/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1946.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1946",
"merged_at": "2021-03-12T09:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1946.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1946"
} | true | [
"@lhoestq question about public API: `keep_in_memory` or just `in_memory`?",
"For consistence I'd say `keep_in_memory`, but no strong opinion.",
"@lhoestq done!"
] |
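A short usage sketch of the method this PR adds; the CSV path is a placeholder, and `keep_in_memory` is the argument name settled in the comments:

```python
from datasets import Dataset

# "data/train.csv" is a placeholder; keep_in_memory=True loads the table into
# RAM instead of memory-mapping the on-disk cache file.
ds = Dataset.from_csv("data/train.csv", keep_in_memory=True)
print(ds)
```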
https://api.github.com/repos/huggingface/datasets/issues/812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/812/comments | https://api.github.com/repos/huggingface/datasets/issues/812/events | https://github.com/huggingface/datasets/issues/812 | 738,340,217 | MDU6SXNzdWU3MzgzNDAyMTc= | 812 | Too much logging | [] | closed | false | null | 7 | 2020-11-07T23:56:30Z | 2021-01-26T14:31:34Z | 2020-11-16T17:06:42Z | null | I'm doing this in the beginning of my script:
```python
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
```
but I'm still getting these logs:
```
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
[2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock
```
using datasets version = 1.1.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/812/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that",
"+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`)\r\n\r\n```\r\nI1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock\r\nI1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock\r\nI1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock\r\nI1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\nI1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow\r\nI1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock\r\n```",
"So how to solve this problem?",
"In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default.\r\nAlso `set_verbosity_warning` does take into account these logs now.\r\nCan you try to update the lib ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks. For some reason I have to use the older version. Is that possible I can fix this by some surface-level trick?\r\n\r\nI'm still using 1.13 version datasets.",
"On older versions you can use\r\n```python\r\nimport logging\r\n\r\nlogging.getLogger(\"filelock\").setLevel(logging.WARNING)\r\n```",
"Whoa Thank you! It works!"
] |
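The two fixes from the thread, combined into one snippet; the `filelock` line is only needed on older releases, where the lock messages bypass the `datasets` verbosity setting:

```python
import logging

import datasets

# On recent releases this is enough: the lock messages were moved to DEBUG
# level and respect the library verbosity.
datasets.logging.set_verbosity_warning()

# Fallback for older releases, where filelock logs through the standard
# library logger directly.
logging.getLogger("filelock").setLevel(logging.WARNING)
```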
https://api.github.com/repos/huggingface/datasets/issues/5070 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5070/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5070/comments | https://api.github.com/repos/huggingface/datasets/issues/5070/events | https://github.com/huggingface/datasets/issues/5070 | 1,396,765,647 | I_kwDODunzps5TQPPP | 5,070 | Support default config name when no builder configs | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2022-10-04T19:49:35Z | 2022-10-06T14:40:26Z | 2022-10-06T14:40:26Z | null | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to support creating configs on the fly **by name** (not using kwargs), the list of allowed builder configs `BUILDER_CONFIGS` must not be set.
However, if so, then `DEFAULT_CONFIG_NAME` is not supported.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5070/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5070/timeline | null | completed | null | null | false | [
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing/discussions/2/files\r\n\r\nbut which I had to revert since providing no split breaks at run time.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/5670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5670/comments | https://api.github.com/repos/huggingface/datasets/issues/5670/events | https://github.com/huggingface/datasets/issues/5670 | 1,640,607,045 | I_kwDODunzps5hya1F | 5,670 | Unable to load multi class classification datasets | [] | closed | false | null | 2 | 2023-03-25T18:06:15Z | 2023-03-27T22:54:56Z | 2023-03-27T22:54:56Z | null | ### Describe the bug
I've been playing around with the Hugging Face libraries, mostly with `datasets`, and wanted to download multi-class classification datasets to fine-tune BERT on this task ([link](https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer)).
While loading the dataset, I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[44], line 3
1 from datasets import load_dataset
----> 3 imdb_dataset = load_dataset("yelp_review_full")
4 imdb_dataset
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1719, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1716 ignore_verifications = ignore_verifications or save_infos
1718 # Create a dataset builder
-> 1719 builder_instance = load_dataset_builder(
1720 path=path,
1721 name=name,
1722 data_dir=data_dir,
1723 data_files=data_files,
1724 cache_dir=cache_dir,
1725 features=features,
1726 download_config=download_config,
1727 download_mode=download_mode,
1728 revision=revision,
1729 use_auth_token=use_auth_token,
1730 **config_kwargs,
1731 )
1733 # Return iterable dataset in case of streaming
1734 if streaming:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/load.py:1523, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1520 raise ValueError(error_msg)
1522 # Instantiate the dataset builder
-> 1523 builder_instance: DatasetBuilder = builder_cls(
1524 cache_dir=cache_dir,
1525 config_name=config_name,
1526 data_dir=data_dir,
1527 data_files=data_files,
1528 hash=hash,
1529 features=features,
1530 use_auth_token=use_auth_token,
1531 **builder_kwargs,
1532 **config_kwargs,
1533 )
1535 return builder_instance
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:1292, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs)
1291 def __init__(self, *args, writer_batch_size=None, **kwargs):
-> 1292 super().__init__(*args, **kwargs)
1293 # Batch size used by the ArrowWriter
1294 # It defines the number of samples that are kept in memory before writing them
1295 # and also the length of the arrow chunks
1296 # None means that the ArrowWriter will use its default value
1297 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:312, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs)
309 # prepare info: DatasetInfo are a standardized dataclass across all datasets
310 # Prefill datasetinfo
311 if info is None:
--> 312 info = self.get_exported_dataset_info()
313 info.update(self._info())
314 info.builder_name = self.name
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:412, in DatasetBuilder.get_exported_dataset_info(self)
400 def get_exported_dataset_info(self) -> DatasetInfo:
401 """Empty DatasetInfo if doesn't exist
402
403 Example:
(...)
410 ```
411 """
--> 412 return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/builder.py:398, in DatasetBuilder.get_all_exported_dataset_infos(cls)
385 @classmethod
386 def get_all_exported_dataset_infos(cls) -> DatasetInfosDict:
387 """Empty dict if doesn't exist
388
389 Example:
(...)
396 ```
397 """
--> 398 return DatasetInfosDict.from_directory(cls.get_imported_module_dir())
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:370, in DatasetInfosDict.from_directory(cls, dataset_infos_dir)
368 dataset_metadata = DatasetMetadata.from_readme(Path(dataset_infos_dir) / "README.md")
369 if "dataset_info" in dataset_metadata:
--> 370 return cls.from_metadata(dataset_metadata)
371 if os.path.exists(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME)):
372 # this is just to have backward compatibility with dataset_infos.json files
373 with open(os.path.join(dataset_infos_dir, config.DATASETDICT_INFOS_FILENAME), encoding="utf-8") as f:
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:396, in DatasetInfosDict.from_metadata(cls, dataset_metadata)
387 return cls(
388 {
389 dataset_info_yaml_dict.get("config_name", "default"): DatasetInfo._from_yaml_dict(
(...)
393 }
394 )
395 else:
--> 396 dataset_info = DatasetInfo._from_yaml_dict(dataset_metadata["dataset_info"])
397 dataset_info.config_name = dataset_metadata["dataset_info"].get("config_name", "default")
398 return cls({dataset_info.config_name: dataset_info})
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/info.py:332, in DatasetInfo._from_yaml_dict(cls, yaml_data)
330 yaml_data = copy.deepcopy(yaml_data)
331 if yaml_data.get("features") is not None:
--> 332 yaml_data["features"] = Features._from_yaml_list(yaml_data["features"])
333 if yaml_data.get("splits") is not None:
334 yaml_data["splits"] = SplitDict._from_yaml_list(yaml_data["splits"])
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1745, in Features._from_yaml_list(cls, yaml_data)
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
-> 1745 return cls.from_dict(from_yaml_inner(yaml_data))
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1741, in <dictcomp>(.0)
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
-> 1741 return {name: from_yaml_inner(_feature) for name, _feature in zip(names, obj)}
1742 else:
1743 raise TypeError(f"Expected a dict or a list but got {type(obj)}: {obj}")
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1736, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1734 return {"_type": snakecase_to_camelcase(obj["dtype"])}
1735 else:
-> 1736 return from_yaml_inner(obj["dtype"])
1737 else:
1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1738, in Features._from_yaml_list.<locals>.from_yaml_inner(obj)
1736 return from_yaml_inner(obj["dtype"])
1737 else:
-> 1738 return {"_type": snakecase_to_camelcase(_type), **unsimplify(obj)[_type]}
1739 elif isinstance(obj, list):
1740 names = [_feature.pop("name") for _feature in obj]
File /work/pi_adrozdov_umass_edu/syerawar_umass_edu/envs/vadops/lib/python3.10/site-packages/datasets/features/features.py:1706, in Features._from_yaml_list.<locals>.unsimplify(feature)
1704 if isinstance(feature.get("class_label"), dict) and isinstance(feature["class_label"].get("names"), dict):
1705 label_ids = sorted(feature["class_label"]["names"])
-> 1706 if label_ids and label_ids != list(range(label_ids[-1] + 1)):
1707 raise ValueError(
1708 f"ClassLabel expected a value for all label ids [0:{label_ids[-1] + 1}] but some ids are missing."
1709 )
1710 feature["class_label"]["names"] = [feature["class_label"]["names"][label_id] for label_id in label_ids]
TypeError: can only concatenate str (not "int") to str
```
The same issue happens when I try to load the `go-emotions` multi-class classification dataset. Could somebody guide me on how to fix this issue?
### Steps to reproduce the bug
Run the following code snippet in a python script/ notebook cell:
```
from datasets import load_dataset
yelp_dataset = load_dataset("yelp_review_full")
yelp_dataset
```
### Expected behavior
The dataset should load correctly, showing the train, test and unsupervised splits with the basic data statistics
### Environment info
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-124-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- PyArrow version: 8.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5670/timeline | null | completed | null | null | false | [
"Hi ! This sounds related to https://github.com/huggingface/datasets/issues/5406\r\n\r\nUpdating `datasets` fixes the issue ;)",
"Thanks @lhoestq!\r\n\r\nI'll close this issue now."
] |
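A minimal reproduction of the `TypeError` at the bottom of the traceback: if the YAML class-label `names` are parsed into a dict keyed by strings, `sorted()` returns strings and the `range` comparison mixes `str` and `int`. The label names below are placeholders:

```python
names = {"0": "1 star", "1": "2 stars", "2": "3 stars"}  # string keys, as parsed from YAML
label_ids = sorted(names)                                # ['0', '1', '2'], not [0, 1, 2]
try:
    label_ids != list(range(label_ids[-1] + 1))          # '2' + 1 -> TypeError
except TypeError as err:
    print(err)  # can only concatenate str (not "int") to str
```

Upgrading `datasets` avoids this because newer releases parse the class-label ids as integers before the check, as noted in the comments.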
https://api.github.com/repos/huggingface/datasets/issues/3707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3707/comments | https://api.github.com/repos/huggingface/datasets/issues/3707/events | https://github.com/huggingface/datasets/issues/3707 | 1,132,741,903 | I_kwDODunzps5DhEUP | 3,707 | `.select`: unexpected behavior with `indices` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-02-11T15:20:01Z | 2022-02-14T19:19:21Z | 2022-02-14T19:19:21Z | null | ## Describe the bug
The `.select` method will not throw when sending `indices` bigger than the dataset length; `indices` will be wrapped instead. This behavior is not documented anywhere, and is not intuitive.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"text": ["d", "e", "f"], "label": [4, 5, 6]})
res1 = ds.select([1, 2, 3])['text']
res2 = ds.select([1000])['text']
```
## Expected results
Both results should throw an `Error`.
## Actual results
`res1` will give `['e', 'f', 'd']`
`res2` will give `['e']`
## Environment info
Bug found from this environment:
- `datasets` version: 1.16.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.7
- PyArrow version: 6.0.1
It was also replicated on `master`.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3707/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3707/timeline | null | completed | null | null | false | [
"Hi! Currently, we compute the final index as `index % len(dset)`. I agree this behavior is somewhat unexpected and that it would be more appropriate to raise an error instead (this is what `df.iloc` in Pandas does, for instance).\r\n\r\n@albertvillanova @lhoestq wdyt?",
"I agree. I think `index % len(dset)` was used to support negative indices.\r\n\r\nI think this needs to be fixed in `datasets.formatting.formatting._check_valid_index_key` if I'm not mistaken"
] |
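An illustrative, pure-Python sketch of both behaviors (not the `datasets` source): the old `index % len(dataset)` wrapping that produces the reported results, and a bounds check in the style of pandas' `df.iloc` that still allows negative indices:

```python
def wrapped_lookup(data, indices):
    # old behavior: out-of-range indices silently wrap around
    return [data[i % len(data)] for i in indices]

def checked_lookup(data, indices):
    # proposed behavior: raise on out-of-range keys, keep negative indexing
    for i in indices:
        if not -len(data) <= i < len(data):
            raise IndexError(f"Invalid key: {i} is out of bounds for size {len(data)}")
    return [data[i] for i in indices]

texts = ["d", "e", "f"]
print(wrapped_lookup(texts, [1, 2, 3]))  # ['e', 'f', 'd'] -- matches the report
print(wrapped_lookup(texts, [1000]))     # ['e']           -- matches the report
print(checked_lookup(texts, [-1, 0]))    # ['f', 'd']
# checked_lookup(texts, [1000]) raises IndexError
```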
https://api.github.com/repos/huggingface/datasets/issues/2488 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2488/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2488/comments | https://api.github.com/repos/huggingface/datasets/issues/2488/events | https://github.com/huggingface/datasets/pull/2488 | 919,500,756 | MDExOlB1bGxSZXF1ZXN0NjY4ODIwNDA1 | 2,488 | Set configurable downloaded datasets path | [] | closed | false | {
"closed_at": "2021-07-09T05:50:07Z",
"closed_issues": 12,
"created_at": "2021-05-31T16:13:06Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-07-08T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/5",
"id": 6808903,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels",
"node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==",
"number": 5,
"open_issues": 0,
"state": "closed",
"title": "1.9",
"updated_at": "2021-07-12T14:12:00Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/5"
} | 0 | 2021-06-12T09:09:03Z | 2021-06-14T09:13:27Z | 2021-06-14T08:29:07Z | null | Part of #2480. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2488/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2488/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2488.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2488",
"merged_at": "2021-06-14T08:29:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2488.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2488"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1142/comments | https://api.github.com/repos/huggingface/datasets/issues/1142/events | https://github.com/huggingface/datasets/pull/1142 | 757,413,920 | MDExOlB1bGxSZXF1ZXN0NTMyNzk1MjY0 | 1,142 | Fix PerSenT | [] | closed | false | null | 0 | 2020-12-04T21:21:02Z | 2020-12-14T13:39:34Z | 2020-12-14T13:39:34Z | null | New PR for dataset PerSenT | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1142/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1142/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1142.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1142",
"merged_at": "2020-12-14T13:39:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1142.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1142"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/193/comments | https://api.github.com/repos/huggingface/datasets/issues/193/events | https://github.com/huggingface/datasets/issues/193 | 624,655,558 | MDU6SXNzdWU2MjQ2NTU1NTg= | 193 | [Tensorflow] Use something else than `from_tensor_slices()` | [] | closed | false | null | 7 | 2020-05-26T07:19:14Z | 2020-10-27T15:28:11Z | 2020-10-27T15:28:11Z | null | In the example notebook, the TF Dataset is built using `from_tensor_slices()` :
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
labels = {"output_1": train_tf_dataset["start_positions"]}
labels["output_2"] = train_tf_dataset["end_positions"]
tfdataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(8)
```
But according to the [official tensorflow documentation](https://www.tensorflow.org/guide/data#consuming_numpy_arrays), this will load the entire dataset into memory.
**This defeats one purpose of this library, which is lazy loading.**
Is there any other way to load the `nlp` dataset into a TF dataset lazily?
---
For example, is it possible to use [Arrow dataset](https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowDataset) ? If yes, is there any code example ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/193/timeline | null | completed | null | null | false | [
"I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.",
"Is `tf.data.Dataset.from_generator` working on TPU ?",
"`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.gen)\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/context.py\", line 1900, in execution_mode\r\n executor_new.wait()\r\n File \"/home/usr/.venv/bart/lib/python3.6/site-packages/tensorflow_core/python/eager/executor.py\", line 67, in wait\r\n pywrap_tensorflow.TFE_ExecutorWaitForAllPendingNodes(self._handle)\r\ntensorflow.python.framework.errors_impl.NotFoundError: No registered 'PyFunc' OpKernel for 'CPU' devices compatible with node {{node PyFunc}}\r\n . Registered: <no registered kernels>\r\n\r\n [[PyFunc]]\r\n```\r\n\r\n---\r\n\r\n@lhoestq It seems you merged some changes that allow lazy-loading. **Can you give an example of how to use ?** Maybe the Colab notebook should be updated with this method as well.",
"Could you send me the code you used to run create the dataset using `.from_generator` ? What version of tensorflow are you using ?",
"I'm using TF2.2\r\n\r\nHere is my code :\r\n```\r\nimport nlp\r\nfrom transformers import BartTokenizer\r\n\r\ntokenizer = BartTokenizer.from_pretrained('bart-large')\r\n\r\ndef encode(sample):\r\n article_inputs = tokenizer.encode_plus(sample[\"article\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n summary_inputs = tokenizer.encode_plus(sample[\"highlights\"], max_length=tokenizer.model_max_length, pad_to_max_length=True)\r\n\r\n article_inputs.update({\"lm_labels\": summary_inputs['input_ids']})\r\n return article_inputs\r\n\r\ncnn_dm = nlp.load_dataset('cnn_dailymail', '3.0.0', split='test')\r\ncnn_dm = cnn_dm.map(encode)\r\n\r\ndef gen():\r\n for sample in cnn_dm:\r\n s = {}\r\n s['input_ids'] = sample['input_ids']\r\n s['attention_mask'] = sample['attention_mask']\r\n s['lm_labels'] = sample['lm_labels']\r\n yield s\r\n\r\ndataset = tf.data.Dataset.from_generator(gen, output_types={k: tf.int32 for k in ['input_ids', 'attention_mask', 'lm_labels']}, output_shapes={k: tf.TensorShape([tokenizer.model_max_length]) for k in ['input_ids', 'attention_mask', 'lm_labels']}\r\n```",
"Apparently we'll have to wait for the next tensorflow release to use `.from_generator` and TPU. See https://github.com/tensorflow/tensorflow/issues/34346#issuecomment-598262489",
"Fixed by https://github.com/huggingface/datasets/pull/339"
] |
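A generalized, hedged version of the `from_generator` workaround from the thread; the column names, dtypes, and the fixed sequence length are assumptions for illustration:

```python
import tensorflow as tf

def make_tf_dataset(hf_dataset, columns, seq_len):
    def gen():
        # Iterates lazily over the Arrow-backed dataset instead of loading
        # everything into memory as from_tensor_slices would.
        for sample in hf_dataset:
            yield {k: sample[k] for k in columns}

    return tf.data.Dataset.from_generator(
        gen,
        output_types={k: tf.int32 for k in columns},
        output_shapes={k: tf.TensorShape([seq_len]) for k in columns},
    )

# tf_ds = make_tf_dataset(encoded_dataset, ["input_ids", "attention_mask"], 512).batch(8)
```

Note the TPU limitation mentioned in the comments: `from_generator` relies on a `PyFunc` op that has no TPU kernel, so this path only worked on CPU/GPU at the time; #339 added the Arrow-based alternative.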
https://api.github.com/repos/huggingface/datasets/issues/1153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1153/comments | https://api.github.com/repos/huggingface/datasets/issues/1153/events | https://github.com/huggingface/datasets/pull/1153 | 757,643,302 | MDExOlB1bGxSZXF1ZXN0NTMyOTkwMTk4 | 1,153 | Adding dataset for proto_qa in huggingface datasets library | [] | closed | false | null | 0 | 2020-12-05T09:43:28Z | 2020-12-05T18:53:10Z | 2020-12-05T18:53:10Z | null | Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning
Followed all steps for adding a new dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1153/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1153/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1153.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1153",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1153.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1153"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/997/comments | https://api.github.com/repos/huggingface/datasets/issues/997/events | https://github.com/huggingface/datasets/pull/997 | 755,185,517 | MDExOlB1bGxSZXF1ZXN0NTMwOTQ2MTIy | 997 | Microsoft CodeXGlue | [] | closed | false | null | 4 | 2020-12-02T11:21:18Z | 2021-06-08T13:42:25Z | 2021-06-08T13:42:24Z | null | Datasets from https://github.com/microsoft/CodeXGLUE
This contains 13 datasets:
code_x_glue_cc_clone_detection_big_clone_bench
code_x_glue_cc_clone_detection_poj_104
code_x_glue_cc_cloze_testing_all
code_x_glue_cc_cloze_testing_maxmin
code_x_glue_cc_code_completion_line
code_x_glue_cc_code_completion_token
code_x_glue_cc_code_refinement
code_x_glue_cc_code_to_code_trans
code_x_glue_cc_defect_detection
code_x_glue_ct_code_to_text
code_x_glue_tc_nl_code_search_adv
code_x_glue_tc_text_to_code
code_x_glue_tt_text_to_text
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/997/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/997/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/997.diff",
"html_url": "https://github.com/huggingface/datasets/pull/997",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/997.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/997"
} | true | [
"#978 is working on adding code refinement\r\n\r\nmaybe we should keep the CodeXGlue benchmark (as glue) and don't merge the code_refinement dataset proposed in #978 ?\r\n\r\ncc @reshinthadithyan",
"Hi @madlag and @lhoestq , I am extremely interested in getting this dataset into HF's library as I research in this area a lot. I see that it hasn't been updated in a while, but it is very close to being finished. If no one is currently working on this, I'd be happy to do any final touches that might be needed to get this merged.",
"Hi @ncoop57 ! Thanks for your interest and sorry for the inactivity on this PR.\r\nSure feel free to create another PR to continue this one ! This one was really close to being merged so I think it won't require that much changes. In addition to my previous comments, there should also be a \"Contributions\" subsection (see the template of the README [here](https://github.com/huggingface/datasets/blob/master/templates/README.md))",
"Superseded by https://github.com/huggingface/datasets/pull/2357 ."
] |
https://api.github.com/repos/huggingface/datasets/issues/32 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/32/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/32/comments | https://api.github.com/repos/huggingface/datasets/issues/32/events | https://github.com/huggingface/datasets/pull/32 | 610,715,580 | MDExOlB1bGxSZXF1ZXN0NDEyMTAzMzIx | 32 | Fix map caching notebooks | [] | closed | false | null | 0 | 2020-05-01T11:55:26Z | 2020-05-03T12:15:58Z | 2020-05-03T12:15:57Z | null | Previously, caching results with `.map()` didn't work in notebooks.
To reuse a result, `.map()` serializes the functions with `dill.dumps` and then it hashes it.
The problem is that when using `dill.dumps` to serialize a function, it also saves its origin (filename + line no.) and the origin of all the `globals` this function needs. However for notebooks and shells, the filename looks like \<ipython-input-13-9ed2afe61d25\> and the line no. changes often.
To fix the problem, I added a new dispatch function for code objects that ignore the origin of the code if it comes from a notebook or a python shell.
I tested these cases in a notebook:
- lambda functions
- named functions
- methods
- classmethods
- staticmethods
- classes that implement `__call__`
The caching now works as expected for all of them :)
I also tested the caching in the demo notebook and it works fine ! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/32/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/32/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/32.diff",
"html_url": "https://github.com/huggingface/datasets/pull/32",
"merged_at": "2020-05-03T12:15:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/32.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/32"
} | true | [] |
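An illustrative check (not the fix itself) of the problem this PR describes: `dill` serializes a function's code object together with its origin, and in a shell or notebook that origin is a throwaway name such as `<ipython-input-13-...>`, so the serialized bytes — and hence the cache hash — change from cell to cell:

```python
import dill

def f(x):
    return x + 1

print(f.__code__.co_filename)  # e.g. '<ipython-input-13-...>' in a notebook
dump = dill.dumps(f)
# The origin string is embedded in the serialized bytes, so two cells defining
# the same function can produce different dumps (and different cache hashes).
print(f.__code__.co_filename.encode() in dump)  # True
```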