repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 20,468 | closed | Fix doctests for audio models | # What does this PR do?
The condition was wrong (`[]` should be `()`), which made some doctests fail because they picked up the wrong classes:
```python
if ["SequenceClassification" in model_class or "AudioClassification" in model_class]
```
should be
```python
if ("SequenceClassification" in model_class or "AudioClassification" in model_class)
``` | 11-28-2022 09:43:17 | 11-28-2022 09:43:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merge as the failing (TF) test is irrelevant to this PR. |
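As a side note, here is a minimal standalone sketch of why the original condition always evaluated to true (the model class names below are just illustrative strings, not taken from the PR): wrapping the boolean expression in `[...]` builds a one-element list, and any non-empty list is truthy in Python.
```python
model_classes = ["Wav2Vec2ForSequenceClassification", "Wav2Vec2ForCTC"]

for model_class in model_classes:
    # Buggy form: `[...]` creates a one-element list, which is always truthy,
    # so this branch runs for every class.
    if ["SequenceClassification" in model_class or "AudioClassification" in model_class]:
        always_taken = True

    # Fixed form: the parentheses only group the boolean expression.
    if "SequenceClassification" in model_class or "AudioClassification" in model_class:
        print(f"{model_class} is picked as a classification class")
```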
transformers | 20,467 | closed | Fix device issues in CLIPSeg tests | # What does this PR do?
Just add `.to(torch_device)` in a few places | 11-28-2022 09:07:34 | 11-28-2022 09:07:34 | _The documentation is not available anymore as the PR was closed or merged._ |
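For context, a small self-contained sketch of the kind of device fix this refers to (a toy `nn.Linear` stands in for the CLIPSeg model and tensors used in the tests):
```python
import torch
from torch import nn

torch_device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Linear(4, 2).to(torch_device)                    # stand-in for the model under test
inputs = torch.randn(1, 4).to(torch_device)                 # stand-in for pixel_values
expected_slice = torch.tensor([0.0, 0.0]).to(torch_device)  # expected tensors need moving too

outputs = model(inputs)
# Comparisons only work once every tensor lives on the same device.
print(torch.allclose(outputs[0], expected_slice, atol=100.0))
```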
transformers | 20,466 | closed | RAG README & pl version updated | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The current fine-tuning code in `examples/research_projects/rag` can't be executed without passing the `--distributed_retriever ray` parameter.
In addition, `pytorch-lightning==1.5.10` should be recommended to prevent other miscellaneous errors.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @shamanez
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-27-2022 15:35:12 | 11-27-2022 15:35:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20466). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for your PR, but we don't maintain this example, nor do we want to update it to more recent versions of any libraries it uses
Hi, thanks for the reply. This PR is a late response to issue https://github.com/huggingface/transformers/issues/18704#issue-1345102153. Nevertheless, if this PR can't be accepted now, can you tell me the reasons for not maintaining RAG?<|||||>The RAG model is maintained, it's just this example which is not. It clearly says in the README it should be run with PyTorch Lightning 1.3.1. If you want to run it with a more recent version you need to adapt the script, probably as you did, but I'm not changing what the original authors have done in this research project.
Maintained examples are in the pytorch/tensorflow/flax subfolders of the examples folder.<|||||>Agree with @sgugger actually (sorry for approving it)
But, ok for me to also update the example since fine-tuning RAG seems to be used quite a bit<|||||>@patrickvonplaten
Actually, with the latest PL versions, we can't use **DDPPlugin**. So my suggestion is to move to the Ray distributed retriever only. If we update the code with the current changes, it will fail.
I've added a comment on this in my last PR.
https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/lightning_base.py#L269
So it would be better to come up with an if condition, since the Ray distributed retriever's init function doesn't take any parameters (see the sketch below).
https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/distributed_ray_retriever.py#L87
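A rough, self-contained sketch of the kind of branch being proposed; the class names below are stand-ins, not the actual retriever classes or signatures from the example:
```python
class RayRetrieverStub:
    """Stand-in for the Ray-based retriever, whose __init__ takes no DDP-specific arguments."""
    def __init__(self):
        self.kind = "ray"

class DdpRetrieverStub:
    """Stand-in for the PyTorch DDP retriever, which needs distributed plumbing passed in."""
    def __init__(self, process_group):
        self.kind = "pytorch"
        self.process_group = process_group

def build_retriever(distributed_retriever: str, process_group=None):
    # Branch on the CLI flag so each retriever is constructed with the
    # arguments its __init__ actually accepts.
    if distributed_retriever == "ray":
        return RayRetrieverStub()
    return DdpRetrieverStub(process_group)

print(build_retriever("ray").kind)  # ray
```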
@kaiiwoo <|||||>@shamanez Why not host an updated version of the example on your repo then link to it from here and our community page in the doc?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,465 | closed | [Fix a small bug] Misleading use of variable names | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
It's just a small problem, but it may cause misunderstandings for subsequent developers: the author reversed the positions of `pred` and `hypo`. This pull request fixes that.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-27-2022 06:32:04 | 11-27-2022 06:32:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20465). All of your documentation changes will be reflected on that endpoint.<|||||>We do not maintain those examples, they are given as is :-) You can try pinging the original author to see if they agree with the change however.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,464 | closed | [fix bug] Although this bug will not have any impact on rag here (bec… | … because em is used as the evaluation metric), if you add bleu or rouge here, this bug will have an incorrect impact
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
There is a small bug in the RAG example: the two parameters (`pred` and `hypo`) are reversed.
This error will not affect the experimental results here, since exact match is symmetric, but if you add evaluation metrics such as BLEU or ROUGE, it will affect the results.
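A small self-contained illustration of this point (not the actual RAG metric code): exact match is symmetric in its two arguments, while precision-style metrics such as BLEU or ROUGE are not, so swapping `pred` and `hypo` only matters once those metrics are added.
```python
def exact_match(pred: str, ref: str) -> float:
    # Symmetric: swapping the arguments cannot change the score.
    return float(pred.strip().lower() == ref.strip().lower())

def unigram_precision(pred: str, ref: str) -> float:
    # Toy stand-in for BLEU/ROUGE-style metrics: the fraction of predicted
    # tokens that appear in the reference. Not symmetric in its arguments.
    pred_tokens, ref_tokens = pred.split(), set(ref.split())
    return sum(token in ref_tokens for token in pred_tokens) / len(pred_tokens)

pred, ref = "paris", "the capital is paris"
print(exact_match(pred, ref), exact_match(ref, pred))              # 0.0 0.0 -> order does not matter
print(unigram_precision(pred, ref), unigram_precision(ref, pred))  # 1.0 0.25 -> order matters
```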
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-27-2022 05:09:40 | 11-27-2022 05:09:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,463 | closed | Replace assertions with value errors on distilbert model | # Replace assertions with ValueErrors on distilbert model
This PR is made to check whether the validation checks introduced in #20433 pass for all cases.
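For context, a minimal sketch of the assertion-to-`ValueError` rewrite this PR exercises; the condition and message below are illustrative, not the actual DistilBERT checks:
```python
def check_heads(n_heads: int, dim: int) -> None:
    # Before: an assert, which is stripped under `python -O` and raises a bare AssertionError.
    # assert dim % n_heads == 0, "Hidden size must be divisible by the number of heads"

    # After: an explicit, always-on check that raises an informative exception type.
    if dim % n_heads != 0:
        raise ValueError(f"Hidden size {dim} is not divisible by the number of heads {n_heads}.")

check_heads(n_heads=12, dim=768)  # passes silently
```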
Co-author: @batese2001
To: @younesbelkada | 11-26-2022 10:38:42 | 11-26-2022 10:38:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you so much!
<|||||>Thank you so much for your guidance and help! |
transformers | 20,462 | closed | Replace assertions with value errors on distilbert model #20433 | null | 11-26-2022 10:20:09 | 11-26-2022 10:20:09 | This is a demonstrative PR to see if #20433 errors are resolved or not.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20462). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,461 | closed | FixAuxiliaryLossForDeformableDetr | # What does this PR do?
Deformable DETR does not work when `auxiliary_loss=True`.
Since Deformable DETR keeps `class_embed` and `bbox_embed` as lists of modules, this code will raise a `NotImplementedError`:
```python
intermediate = outputs.intermediate_hidden_states if return_dict else outputs[4]
outputs_class = self.class_embed(intermediate)
outputs_coord = self.bbox_embed(intermediate).sigmoid()
```
```python
outputs_class = self.class_embed(intermediate)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 201, in _forward_unimplemented
raise NotImplementedError
NotImplementedError
```
To fix this, we can simply use predefined `outputs_class` and `outputs_coord` in this [line](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deformable_detr/modeling_deformable_detr.py#L1943-L1944).
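A self-contained sketch of the failure mode and of the fix in spirit (stub modules stand in for the real Deformable DETR heads; this is not the actual modeling code):
```python
import torch
from torch import nn

# In Deformable DETR the prediction heads are per-decoder-layer lists, not single modules.
class_embed = nn.ModuleList([nn.Linear(4, 2) for _ in range(3)])
intermediate = torch.randn(1, 5, 4)  # dummy intermediate hidden states

# Calling the ModuleList directly has no forward() and raises NotImplementedError.
try:
    class_embed(intermediate)
except NotImplementedError:
    print("a ModuleList cannot be called like a single head")

# The fix in spirit: reuse the per-layer outputs that are already computed in the decoder loop.
outputs_class = torch.stack([head(intermediate) for head in class_embed])
print(outputs_class.shape)  # torch.Size([3, 1, 5, 2])
```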
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-26-2022 08:35:57 | 11-26-2022 08:35:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge Ping on this PR.<|||||>Hi @long8v would you be able to run `make fixup` from the root of the repo, and potentially rebase on the main branch to make the CI green?
Thanks!<|||||>I reuploaded this PR [here](https://github.com/huggingface/transformers/pull/20959)! |
transformers | 20,460 | closed | FixValidRatioForDeformableDetr | null | 11-26-2022 08:24:50 | 11-26-2022 08:24:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I don't think it is related with my PR. As you can see, my commit is about forward result in batch inference, and has nothing to do with repo consistency, build test, and so on.<|||||>Hi,
For repo consistency, you need to run `make fixup` from the root of the repo to fix the style and quality of the code.<|||||>Hi, for repo consistency I ran `make fixup` and it raised the same error as CircleCI, `No module named 'keras.saving.hdf5_format'`. I found https://github.com/huggingface/transformers/issues/20393, so I downgraded to `tensorflow==2.10` locally and everything seems to pass.
```
All done! ✨ 🍰 ✨
1 file left unchanged.
python utils/custom_init_isort.py
python utils/sort_auto_mappings.py
doc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source
python utils/check_doc_toc.py --fix_and_overwrite
running deps_table_update
updating src/transformers/dependency_versions_table.py
python utils/check_copies.py
python utils/check_table.py
python utils/check_dummies.py
python utils/check_repo.py
Checking all models are included.
Checking all models are public.
2022-12-04 08:42:15.140190: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-12-04 08:42:15.963675: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:
2022-12-04 08:42:15.963836: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:
2022-12-04 08:42:15.963859: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Checking all models are properly tested.
Checking all objects are properly documented.
Checking all models are in at least one auto class.
python utils/check_inits.py
python utils/check_config_docstrings.py
python utils/tests_fetcher.py --sanity_check
python utils/update_metadata.py --check-only
2022-12-04 08:42:26.671899: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-04 08:42:26.916721: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-12-04 08:42:27.809325: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:
2022-12-04 08:42:27.809472: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib:/usr/local/lib:
2022-12-04 08:42:27.809493: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
```
and for the others, I also ran all the Deformable DETR related tests (pytest on the two .py files in `tests/models/deformable_detr`), and everything passed.
Could you tell me what else I should do?<|||||>@long8v you'll need to rebase on the main branch to fix this issue:
```
git remote add upstream https://github.com/huggingface/transformers.git
git fetch upstream
git rebase upstream/main
```<|||||>I reuploaded this PR [here](https://github.com/huggingface/transformers/pull/20958)! |
transformers | 20,459 | closed | Efficientformer | This PR adds Efficientformer, a model that has similar latency as MobileNets, but achieves better accuracy on ImageNet. It is based on the closed PR: https://github.com/huggingface/transformers/pull/18296
Paper: https://arxiv.org/abs/2206.01191
Code and weights: https://github.com/snap-research/EfficientFormer
Fixes https://github.com/huggingface/transformers/issues/18041
## Who can review?
@alaradirik @NielsRogge | 11-25-2022 23:07:42 | 11-25-2022 23:07:42 | Hey @Bearnardd, thank you for working on this! Could you run `make fixup` to fix the failed style and code quality tests?
Also, type casting function arguments (e.g. `def something(arg1: torch.Tensor):`) causes errors if the type depends on a conditionally imported library (torch), you can see the failed test logs if you head over to the CI test details. Could you remove those from `test_modeling_efficientformer.py`?<|||||>Hi @alaradirik - thank you very much for the detailed review! I will address the changes shortly :) . I am aware of the failing tests but I am not entirely sure how to count the number of expected attentions and hidden layers for this particular model since it does not have a "standard" transformer based architecture. Nevertheless I think that I will address the current comments and as the next step I will ask you some questions about expected attention and hidden outputs.<|||||>> Hi @alaradirik - thank you very much for the detailed review! I will address the changes shortly :) . I am aware of the failing tests but I am not entirely sure how to count the number of expected attentions and hidden layers for this particular model since it does not have a "standard" transformer based architecture. Nevertheless I think that I will address the current comments and as the next step I will ask you some questions about expected attention and hidden outputs.
Hey @Bearnardd, no problem at all! We define our own small model architecture within the `test_modeling_efficientformer.py` file as it is faster to test with a smaller dummy model. You would just need to check `num_hidden_layers` and `num_attention_heads` attributes of the test class to see the expected number of layers. It seems the model has the correct number of attention heads and hidden layers but doesn't return all of the outputs (attentions and hidden state outputs from all layers).
If you are sure the implementation is correct and this is expected (in this case or other cases), you can always override the common tests within `test_modeling_efficientformer.py` by adding a method with the same name to the test class (`EfficientFormerModelTester`).
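As a rough sketch of what such an override looks like (generic stand-in classes, not the actual test code):
```python
import unittest

class ModelTesterMixin:
    # Common check shared by all models: one attention output per hidden layer.
    def test_attention_outputs(self):
        self.assertEqual(len(self.attention_outputs), self.num_hidden_layers)

class EfficientFormerLikeModelTest(ModelTesterMixin, unittest.TestCase):
    num_hidden_layers = 4
    attention_outputs = [object()]  # hybrid model: only the last stage is attention-based

    # Same method name -> replaces the common test with an architecture-specific check.
    def test_attention_outputs(self):
        self.assertEqual(len(self.attention_outputs), 1)

if __name__ == "__main__":
    unittest.main()
```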
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @NielsRogge @sgugger - could you take a look at the changes?<|||||>@NielsRogge I have applied the changes. Would you mind to do the review?<|||||>@sgugger thanks good catch with the resolved one. Actually I have checked that and `self.ab` is used in the `EfficientFormerSelfAttention` forward method so in fact it is needed. Moreover the original `EfficientFormer` code is based on the `levit` model and in the levit code there is a similar method.<|||||>Model is on the hub under the following [path](https://huggingface.co/Bearnardd/efficientformer-l1-300).<|||||>> Thanks for the explanation on the train method. Could you just make sure that caching this tensor this way does not add a key to the state dict of the model? In LeViT, the cache is a dictionary, not a tensor, so there is no problem.
@sgugger I have checked that and it does not add a key.<|||||>Thanks @Bearnardd !
@NielsRogge I'll let you have one last look and merge if you're happy :-)<|||||>Thank you so much for working on this @Bearnardd! |
transformers | 20,458 | closed | [CLIPTokenizer] Improve warning | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As can be seen in this thread, the warning is a bit confusing for libraries built on top of `transformers`: https://github.com/huggingface/diffusers/issues/1388#issuecomment-1327760610
Could we maybe downgrade it to an "info" statement and remove the mention of BERT?
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-25-2022 19:00:38 | 11-25-2022 19:00:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I feel it's a bit better with a comma or even a period.
```
"ftfy or spacy is not installed, using custom BasicTokenizer instead of ftfy."
```
I am super motivated to open a PR if you even allow me to just add a comma. |
transformers | 20,457 | closed | No module named 'keras.saving.hdf5_format' | I am running a virtual instance of Ubuntu 22.04LTS on Google Cloud. I have followed these instructions:[https://huggingface.co/docs/transformers/installation](url)
I am running solely CPU so I followed the instructions for that. Independently installed tensorflow, flax, and pytorch without error.
After doing this and using the test command:
`python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
`
I get the following error:
```
(.env) colinvink2002@instance-1:~$ python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('we love you'))"
2022-11-25 18:45:40.340282: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-25 18:45:40.556836: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe
2022-11-25 18:45:40.556907: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-11-25 18:45:41.795427: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe
2022-11-25 18:45:41.795623: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/lib/mesa-diverted/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu/mesa:/usr/lib/x86_64-linux-gnu/dri:/usr/lib/x86_64-linux-gnu/gallium-pipe
2022-11-25 18:45:41.795646: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
No model was supplied, defaulted to distilbert-base-uncased-finetuned-sst-2-english and revision af0f99b (https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english).
Using a pipeline without specifying a model name and revision in production is not recommended.
Downloading: 100%|██████████| 629/629 [00:00<00:00, 298kB/s]
Traceback (most recent call last):
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1076, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/colinvink2002/anaconda3/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/models/distilbert/modeling_tf_distilbert.py", line 34, in <module>
from ...modeling_tf_utils import (
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 39, in <module>
from keras.saving.hdf5_format import save_attributes_to_hdf5_group
ModuleNotFoundError: No module named 'keras.saving.hdf5_format'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/pipelines/__init__.py", line 727, in pipeline
framework, model = infer_framework_load_model(
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/pipelines/base.py", line 233, in infer_framework_load_model
_class = getattr(transformers_module, f"TF{architecture}", None)
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1067, in __getattr__
value = getattr(module, name)
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1066, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/colinvink2002/.env/lib/python3.9/site-packages/transformers/utils/import_utils.py", line 1078, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.distilbert.modeling_tf_distilbert because of the following error (look up to see its traceback):
No module named 'keras.saving.hdf5_format'
``` | 11-25-2022 18:52:00 | 11-25-2022 18:52:00 | Ah, I used `pip install git+https://github.com/huggingface/transformers` works now. |
transformers | 20,456 | closed | Fix typo in FSMT Tokenizer | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @stas00 | 11-25-2022 18:27:35 | 11-25-2022 18:27:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,455 | closed | Fix links of `contrastive_loss` | # What does this PR do?
We copied/pasted/replaced, but here we should not have replaced CLIP with the new model name. | 11-25-2022 17:00:58 | 11-25-2022 17:00:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,454 | closed | [trainer] apex test fix | there was a small mistake in a test of https://github.com/huggingface/transformers/pull/18961, this PR fixes it.
the main CI doesn't have apex installed, which is why it missed it.
Thank you, @ydshieh for the heads up about the breakage
| 11-25-2022 16:20:43 | 11-25-2022 16:20:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,453 | closed | [Vision] Support different floating precision inputs from the `ImageProcessor` | # What does this PR do?
This PR introduces the input casting mechanism for image processors. Since the introduction of `accelerate`-supported models for Vision, I have been playing around with half-precision models. I found it a bit unintuitive to manually cast the `pixel_values` outside the `ImageProcessor` class. Therefore, for some models, [small hacks have been introduced](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/modeling_vit.py#L571-L574) to make the casting operation more user-friendly.
With this PR, it will be possible to cast the input tensors to any floating-point precision, for any framework, at the `ImageProcessor` level, as follows:
```
from transformers import ViTFeatureExtractor
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384')
inputs = feature_extractor(images=image, return_tensors="np", float_precision="float16")
print(inputs.pixel_values.dtype)
>>> float16
```
The casting discards non-floating-point tensors, so these tensors should not be affected by the casting mechanism (think, for example, of `ViLT`, which takes both text and image inputs).
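A small plain-NumPy sketch of the selective-casting idea described here (not the actual `ImageProcessor` implementation): floating-point arrays are cast, anything else is passed through untouched.
```python
import numpy as np

def cast_floats(batch: dict, dtype=np.float16) -> dict:
    # Only floating-point arrays are cast; integer tensors such as input_ids stay as they are.
    return {
        key: value.astype(dtype) if np.issubdtype(value.dtype, np.floating) else value
        for key, value in batch.items()
    }

batch = {
    "pixel_values": np.random.rand(1, 3, 224, 224).astype(np.float32),
    "input_ids": np.array([[101, 2023, 102]]),
}
casted = cast_floats(batch)
print(casted["pixel_values"].dtype, casted["input_ids"].dtype)  # float16 and an unchanged int dtype
```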
With this PR, the hacks introduced on ViT and OWLViT will be removed!
cc @amyeroberts @ydshieh | 11-25-2022 16:18:54 | 11-25-2022 16:18:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20453). All of your documentation changes will be reflected on that endpoint.<|||||>I think this PR is ready, at least as a PoC!
To make the PR complete, for now the arg `float_precision` needs to be manually added for each image processor. Before moving forward and starting to do it for all image processors and adding tests, I would love to hear from @sgugger, @amyeroberts & @ydshieh to see if this is the approach we would like to follow!
Thanks again!<|||||>Thanks so much everyone for your comments!
After thinking a bit and trying to see if this could be useful for `flax`
```
import jax.numpy as jnp
from transformers import FlaxViTForImageClassification, ViTFeatureExtractor
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = FlaxViTForImageClassification.from_pretrained("google/vit-base-patch16-224", dtype=jnp.float16)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
inputs = feature_extractor(images=image, return_tensors="np")
outputs = model(**inputs)
print(outputs)
```
it seems that `flax` can deal properly with different `dtype`, without having to explicitly cast the input. I think that a good point has been raised by @sgugger, however it could be useful if it is needed on `tf` side. If not, happy to change the PR to something that modifies only the `.to` function as this will be intended only for PyTorch.
<|||||>I don't have strong opinion though. So you can follow what @sgugger suggests. If we find it's useful for other frameworks, we can add them back.<|||||>Thanks everyone!
Let's keep this PR open in case we figure out this is needed for `tf`. I have opened a PR in #20536 for supporting dtypes in `.to`<|||||>> @gante @Rocketknight1 - how useful would this be in TF land?
I don't think our TF models are compatible with half-precision, right @Rocketknight1? At least I haven't used TF with half-precision :D <|||||>Extremely late reply on the TF front, but yeah, we aren't really running TF models in half precision right now. We do support mixed precision (similar to Torch AMP), but we don't officially support splatting the whole model to (b)float16 yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,452 | closed | Documentation missing is_decoder | ### System Info
Not relevant. Issue is about online documentation.
### Who can help?
@sgugger @stevhliu
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The bert documentation [page](https://huggingface.co/docs/transformers/model_doc/bert) mentions `is_decoder` 5 times, but the `BertConfig` [class documentation](https://huggingface.co/docs/transformers/model_doc/bert) does not mention it a single time.
This probably affects other models as well.
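A sketch of the kind of entry the `BertConfig` docstring is missing; the wording below is illustrative rather than the official description:
```python
class BertConfigDocSketch:
    """
    Args:
        is_decoder (`bool`, *optional*, defaults to `False`):
            Whether the model is used as a decoder. If `False`, the model is used as an encoder.
    """
```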
### Expected behavior
BertConfig class documentation should contain an entry for `is_decoder`. | 11-25-2022 13:38:31 | 11-25-2022 13:38:31 | Thanks for flagging. Would you like to open a PR with a fix? |
transformers | 20,451 | closed | Wav2Vec2 adapter layer being ignored at random | ### System Info
Hi there!
During training, the hidden states of the base Wav2Vec2 model seem to be randomly skipping the adapter layer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1322).
Just repeatedly running a forward pass with the same inputs will, on occasion, produce different output sequence lengths. When this happens I've logged the shapes before the adapter layer is applied as well as after, and they are the same, indicating that the layer is being skipped completely.
### Who can help?
Pinging @patrickvonplaten and @anton-l, please tell me I'm going crazy.
### Information
- [X] The official example scripts
- [ ] My own modified scripts
I'm using the current Colab environment.
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained(
    "anton-l/wav2vec2-base-lang-id",
    add_adapter=True,
    adapter_stride=2,
    adapter_kernel_size=3,
    num_adapter_layers=2,
)
model.train()  # NB

dummy_input = torch.randn((1, 16000))
expected_output_sequence_length = 13

for _ in range(200):
    output_shape = model(input_values=dummy_input)[0].shape[1]
    if output_shape != expected_output_sequence_length:
        print(output_shape)
### Expected behavior
The above loop shouldn't print anything out. | 11-25-2022 13:31:12 | 11-25-2022 13:31:12 | Hi, I think it's because `Wav2Vec2AdapterLayer` layers got dropped out here.
https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1006-L1009
You can pass `layerdrop=0` to `from_pretrained()` to deactivate it.
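For instance, building on the reproduction script above (this exact call is a sketch, not copied from the thread):
```python
from transformers import Wav2Vec2Model

# Setting layerdrop to 0 disables the stochastic layer dropping in training mode,
# so the adapter layers are always applied and the output length stays stable.
model = Wav2Vec2Model.from_pretrained(
    "anton-l/wav2vec2-base-lang-id",
    add_adapter=True,
    layerdrop=0.0,
)
model.train()
```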
cc @patrickvonplaten, I understand the use of layerdrop in transformers structure, but why should we also have it in CNNs (Wav2Vec2AdapterLayer) ?<|||||>> Hi, I think it's because `Wav2Vec2AdapterLayer` layers got dropped out here.
>
> https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1006-L1009
>
> You can pass `layerdrop=0` to `from_pretrained()` to deactivate it.
>
> cc @patrickvonplaten, I understand the use of layerdrop in transformers structure, but why should we also have it in CNNs (Wav2Vec2AdapterLayer) ?
Wow, completely missed that. Thank you!<|||||>Hey @OllieBroadhurst! Are you using the adapter layer to fine-tune the Wav2Vec2 model standalone? The adapter layer works best when combining the Wav2Vec2 model in a sequence-to-sequence combination (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#warm-started-speech-encoder-decoder-model)<|||||>> Hey @OllieBroadhurst! Are you using the adapter layer to fine-tune the Wav2Vec2 model standalone? The adapter layer works best when combining the Wav2Vec2 model in a sequence-to-sequence combination (see https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition#warm-started-speech-encoder-decoder-model)
Hi @sanchit-gandhi!
It actually is for an encoder-decoder model. The reason it caused an issue is that I'm passing the encoder outputs to `inputs_embeds` instead of `encoder_hidden_states` and the positional embedding dim was smaller than the encoder output dim whenever the adapter layer was skipped. So definitely an edge case :) |
transformers | 20,450 | closed | fix `word_to_tokens` docstring format | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR fixes a formatting issue in the `word_to_tokens` docstring which prevented the method's outputs from being displayed on the documentation site.
I also added an example for the `None` return value.
Fixes https://github.com/huggingface/transformers/issues/20449
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-25-2022 12:46:52 | 11-25-2022 12:46:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,449 | closed | Encoding.word_to_tokens() returns None within valid sequence | ### System Info
- `transformers` version: 4.23.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes(?)
- Using distributed or parallel set-up in script?: no
### Who can help?
@SaulLu @sgugger @stevhliu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Tokenize a sentence -> `BatchEncoding`
2. Iterate over `word_ids`
3. Call `word_to_chars(word_index)`
4. `TypeError` is raised at arbitrary word index (see output below)
```
MODEL_NAME = "DTAI-KULeuven/robbertje-1-gb-non-shuffled"
MODEL_MAX_LENGTH = 512
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(
    MODEL_NAME, model_max_length=MODEL_MAX_LENGTH, truncation=True
)
text = "Dit is een goede tekst."
encoding = tokenizer(text)
for word_index in range(len(encoding.word_ids())):
    if word_index is not None:
        print(word_index)
        char_span = encoding.word_to_chars(word_index)
0
1
2
3
4
5
6
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
tokenization_test.ipynb Cell 3 in <cell line: 1>()
[2](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=1) if word_index is not None:
[3](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=2) print(word_index)
----> [4](vscode-notebook-cell:/tokenization_test.ipynb#W2sZmlsZQ%3D%3D?line=3) char_span = encoding.word_to_chars(word_index)
File ~/opt/anaconda3/envs/SoS/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:615, in BatchEncoding.word_to_chars(self, batch_or_word_index, word_index, sequence_index)
613 batch_index = 0
614 word_index = batch_or_word_index
--> 615 return CharSpan(*(self._encodings[batch_index].word_to_chars(word_index, sequence_index)))
TypeError: transformers.tokenization_utils_base.CharSpan() argument after * must be an iterable, not NoneType
```
The word index is valid:
```
encoding.word_ids()[word_index:word_index+10]
[164, 165, 166, 166, 166, 166, 167, 168, 168, 168]
```
On further investigation, I have noticed that there is a work-around by validating there is a word-to-token mapping for the word index:
```
if word_index is not None and encoding.word_to_tokens(word_index) is not None:
[...]
```
So the underlying issue seems to be that `word_to_tokens()` sometimes returns None, although it seems counter-intuitive that there are words in a text that do not have corresponding tokens.
### Expected behavior
`BatchEncoding.word_to_tokens()` should not output `None`; or it should be [documented](https://huggingface.co/docs/transformers/main_classes/tokenizer#transformers.BatchEncoding.word_to_tokens) why/if this can happen. | 11-25-2022 12:22:10 | 11-25-2022 12:22:10 | Hi @carschno ,
Thank you very much for bringing the problem to our attention! It is indeed information filled in the docstring of the method but there is a problem when rendering this documentation on our site. I'm trying to fix it in this PR #20450 .<|||||>I see, it is a documentation issue. Thanks for looking into it!
I still think it is counter-intuitive that there can be words without corresponding tokens. I can imagine some special cases, but perhaps it would be a good occasion to elaborate or exemplify those cases in the documentation a bit?<|||||>> I still think it is counter-intuitive that there can be words without corresponding tokens. I can imagine some special cases, but perhaps it would be a good occasion to elaborate or exemplify those cases in the documentation a bit?
Thanks for your feedback! We can indeed give an example in the documentation. It is a case that occurs in particular when we ask to the tokenizer to add special tokens to match a template. For example, if it is asked to add a class token at the beginning of the sentence, this token class token does not correspond to anything in the initial raw sentence.<|||||>Thanks again, that makes sense! However, the example case you describe does not really fit the case I have encountered. No template has been involved there.
Originally, I came across the issue in a long, OCR'd text with many special characters and erroneous tokens due to OCR errors. But in the example I used in my investigations (and pasted here), this is not the case either.
<|||||>> But in the example I used in my investigations (and pasted here), this is not the case either.
I could be wrong but it seems to me that your example uses a template. We can see it by running the following code:
```python
print(encoding.word_ids())
print(tokenizer.convert_ids_to_tokens(encoding.input_ids))
```
which gives:
```bash
[None, 0, 1, 2, 3, 4, 5, None]
['<s>', 'Dit', 'Ġis', 'Ġeen', 'Ġgoede', 'Ġtekst', '.', '</s>']
```
Here we can see that the `None` values correspond to the "template" tokens `'<s>'` and `'</s>'`.
<|||||>I suppose you are right. I had some doubts because in my aforementioned original text (long and full of errors), this occurred somewhere in the middle of the text. I will try to reproduce it, but I guess there might have been special tokens as well, due to longer sequences of whitespace and/or punctuation. |
transformers | 20,448 | closed | Could you do `pip show huggingface_hub`? | Could you do `pip show huggingface_hub`?
_Originally posted by @NielsRogge in https://github.com/huggingface/transformers/issues/20447#issuecomment-1327215502_
| 11-25-2022 11:06:37 | 11-25-2022 11:06:37 | This issue appears to have been opened in error. If this is the case @wccccp you can close this Issue Request using the buttons below π <|||||>> This issue appears to have been opened in error. If this is the case @wccccp you can close this Issue Request using the buttons below π
thank you |
transformers | 20,447 | closed | ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (C:\Users\46213\anaconda3\lib\site-packages\huggingface_hub\__init__.py) | ### System Info
windows
python=3.9
transformers 4.23.1
pytorch 1.13.0 py3.9_cuda11.6_cudnn8_0 pytorch
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
>>> import transformers
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\46213\anaconda3\lib\site-packages\transformers\__init__.py", line 30, in <module>
from . import dependency_versions_check
File "C:\Users\46213\anaconda3\lib\site-packages\transformers\dependency_versions_check.py", line 17, in <module>
from .utils.versions import require_version, require_version_core
File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\__init__.py", line 48, in <module>
from .hub import (
File "C:\Users\46213\anaconda3\lib\site-packages\transformers\utils\hub.py", line 32, in <module>
from huggingface_hub import (
ImportError: cannot import name 'CommitOperationAdd' from 'huggingface_hub' (C:\Users\46213\anaconda3\lib\site-packages\huggingface_hub\__init__.py)
>>>
```
### Expected behavior
Please help me to solve this problem! | 11-25-2022 09:29:13 | 11-25-2022 09:29:13 | Could you do `pip show huggingface_hub`?<|||||>>
(base) C:\Users\46213>pip show huggingface_hub
Name: huggingface-hub
Version: 0.10.1
Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub
Home-page: https://github.com/huggingface/huggingface_hub
Author: Hugging Face, Inc.
Author-email: [email protected]
License: Apache
Location: c:\users\46213\anaconda3\lib\site-packages
Requires: typing-extensions, requests, filelock, tqdm, packaging, pyyaml
Required-by: transformers, ltp, evaluate, datasets<|||||>For anyone finding this now, if you installed `transformers` and/or `huggingface_hub` with pip, try re-installing with conda. That solved it for me.<|||||>i.e. `conda install -c conda-forge transformers huggingface_hub`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
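As a quick diagnostic for import errors like the one above, a hedged snippet such as the following shows which `huggingface_hub` installation Python actually picks up and whether it exposes the symbol `transformers` expects:
```python
import huggingface_hub

# if __file__ points somewhere unexpected (e.g. a second site-packages),
# two installations are likely shadowing each other
print(huggingface_hub.__version__, huggingface_hub.__file__)
print(hasattr(huggingface_hub, "CommitOperationAdd"))
```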
transformers | 20,446 | closed | Add AltCLIP | # Adding AltCLIP
We add AltCLIP model.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-25-2022 08:02:43 | 11-25-2022 08:02:43 | @jongjyh @shunxing1234
If there is anything unclear, or you need help regarding the commit conflict, don't hesitate!<|||||>@sgugger @ydshieh Thanks! I'll fix it. :)
By the way, does only the processor need to be redefined? Do we need to check all classes?<|||||>model, config and processor should be redefined, but not tokenizer, image processor, or feature extractor.
(@sgugger Is this correct?)
You can see the file structure in `models/x_clip` to get some idea<|||||>Hi @jongjyh @shunxing1234
I had to fix two files that contain `<<<< HEAD` (from previous conflicts when you merged HF's `main` into your `main`).
<|||||>@sgugger
Currently, the PR authors don't implement `AltCLIPVisionModel`, as the vision component is just the same as `CLIPVisionModel`. They do implement the necessary modules like `AltCLIPVisionTransformer`, as this is required in `AltCLIPModel`.
This seems fair to me.
However,
In the model tester file, they do
```
from ...models.clip.test_modeling_clip import CLIPVisionModelTester
```
and there is no `AltCLIPVisionModelTest` being implemented (same reason, it's just `CLIPVisionModelTest`).
~Do you think this is OK - I don't like the dependency though.~
@jongjyh @shunxing1234
IMO, even though the vision model is just `CLIPVisionModel`, I think for completeness and maximum independence it's good to have `AltCLIPVisionModel`. It should be very quick to add, I believe. <|||||>No, the test files of models should be independent from each other, like the modeling files, or it will make our lives harder down the road for maintenance :-)<|||||>Hi @jongjyh @shunxing1234
Please go ahead with adding `AltCLIPVisionModel`, removing the usage of `CLIPVisionModelTester`, adding `AltCLIPVisionModelTester`.
This could reduce a few more test failures :-)<|||||>Hi, @ydshieh
thank you for checking! I have added the AltCLIP vision model now; please let me know if there are other requests. :)<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Finally the CIs are green! I will review it in more detail!<|||||>Hey, @ydshieh @sgugger
Here is my new pr. :) Please help to check whether it meets the requirements<|||||>Hi, now we still need to convert the weights, I loaded the previous weight file under the new `modeling_altclip.py`:
> Some weights of the model checkpoint at BAAI/AltCLIP were not used when initializing AltCLIPModel: ['text_model.roberta.pooler.dense.bias', 'text_model.roberta.pooler.dense.weight']
It seems only the pooler in RoBERTa needs to be removed. Could I just upload a new `pytorch.bin` without the pooler to replace the original one after the results of the downstream task have been reproduced using this new modeling file?
I also notice the processor file needs to be updated.
Thank you for your long-term follow-up and help!<|||||>> Could I just upload a new pytorch.bin without the pooler to replace the original one after the results of the downstream task have been reproduced using this new modeling file?
Good for me, as long as the results from the original model/checkpoint & from the added model class/converted checkpoint match.
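A hedged sketch of the pooler-stripping step discussed above (file names are illustrative; the keys are the ones reported in the warning):
```python
import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
for key in (
    "text_model.roberta.pooler.dense.weight",
    "text_model.roberta.pooler.dense.bias",
):
    state_dict.pop(key, None)  # drop the unused pooler weights before re-uploading
torch.save(state_dict, "pytorch_model.bin")
```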
<|||||>@jongjyh
It looks like the CI doesn't run anymore, but it indeed ran for some commits you pushed previously. I think you have followed this before, but could you try again π to trigger the CI. Thanks.
> It seems there is an issue with your CircleCI permissions, the tests won't run.
Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)?<|||||>cc @sgugger It's ready for merge IMO (other than doctesting - which the author and me will take care). But before merge, would like to have you final look/approve π Thanks.<|||||>Hi @ydshieh,
I am worried that there may be many dirty commits in this PR at present. Is there any way to squash these 100+ commits before the PR is officially merged into the main branch? :)<|||||>Hi @jongjyh! No worries about the dirty commits, the GitHub page will `Squash and merge`, as shown in the button below :-)<|||||>Hi. The doctests for the config/modeling files pass now. Thank you a lot!
However, the doctest for `docs/source/en/model_doc/altclip.mdx` has a problem: it won't run.
To check
```bash
python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules docs/source/en/model_doc/altclip.mdx -sv --doctest-continue-on-failure --doctest-glob="*.mdx"
```
I checked the file permission
```bash
ls -l docs/source/en/model_doc/
```
which shows that file `altclip.mdx` is executable file `-rwxr-xr-x`, but other files are not. See
```bash
-rw-r--r-- 1 root root 4795 Dec 21 01:32 albert.mdx
-rwxr-xr-x 1 root root 4829 Dec 23 10:01 altclip.mdx
-rw-r--r-- 1 root root 3491 Dec 21 01:32 audio-spectrogram-transformer.mdx
-rw-r--r-- 1 root root 7523 Dec 23 10:01 auto.mdx
-rw-r--r-- 1 root root 9454 Dec 23 10:01 bart.mdx
```
Could you fix this permission issue? Potentially replace that file with a newly created file with the content copied.
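Alternatively, simply dropping the executable bit should be enough (illustrative commands):
```bash
chmod 644 docs/source/en/model_doc/altclip.mdx
git add docs/source/en/model_doc/altclip.mdx
git commit -m "Remove executable bit from altclip.mdx"
```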
Thanks<|||||>@ydshieh @sgugger Hi, we updated the PR, but some unexpected issues (TF errors) occurred<|||||>Thanks a lot for all your work on this! There just needs to be a rebase on main and we can merge this.<|||||>Thank you again @jongjyh @shunxing1234 π ! |
transformers | 20,445 | closed | with pytorch cpu only version. without --no_cuda, using --bf16 will t… | …rigger error like "Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0"
Fixes # (issue)
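An illustrative reproduction of the failure mode described above (the example script and its task arguments are placeholders, not part of this PR):
```bash
# On a CPU-only PyTorch build, --bf16 used to trip the "Your setup doesn't support
# bf16/gpu" check unless --no_cuda was also passed explicitly.
python examples/pytorch/text-classification/run_glue.py \
    --model_name_or_path bert-base-cased \
    --task_name mrpc \
    --output_dir /tmp/bf16-cpu-test \
    --do_train \
    --bf16 \
    --no_cuda
```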
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Library:
- trainer: @sgugger
| 11-25-2022 07:27:49 | 11-25-2022 07:27:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,444 | closed | update cpu related doc | null | 11-25-2022 05:30:06 | 11-25-2022 05:30:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@jianan-gu @sgugger please review the doc update |
transformers | 20,443 | closed | add timeout option for deepspeed engine | # What does this PR do?
This PR allows users to set socket timeout for deepspeed engine for multiple instance training.
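For context, a hedged sketch of how such a timeout can be supplied from the training arguments; the `ddp_timeout` name mirrors the existing distributed-training option and may not be this PR's exact surface:
```python
from transformers import TrainingArguments

# illustrative values only -- the point is that the socket timeout becomes
# configurable instead of being stuck at the default (30 minutes in torch.distributed)
args = TrainingArguments(
    output_dir="out",
    deepspeed="ds_config_zero3.json",
    ddp_timeout=7200,  # seconds, forwarded to the distributed/deepspeed init
)
```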
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-24-2022 19:26:46 | 11-24-2022 19:26:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,442 | closed | Error while importing pretrained model | ### System Info
- `transformers` version: 4.23.1
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj @patrickvonplaten @LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("gpt2", num_labels=2)
```
The above code, on execution, returns
```
RuntimeError: Failed to import transformers.models.gpt2.modeling_gpt2 because of the following error (look up to see its traceback):
name '_C' is not defined
```
This error is not specific to that model or to two labels. I ran the example code snippet given at https://huggingface.co/docs/transformers/training#train-with-pytorch-trainer. The same error is presented, albeit with a different model name.
### Expected behavior
From the tutorial, I was expecting a pretrained model object to be initialized, so that I could proceed with fine-tuning it. | 11-24-2022 18:47:02 | 11-24-2022 18:47:02 | Hi @orectique. Have you tried to do something like the following:
https://discuss.pytorch.org/t/nameerror-name-c-is-not-defined-while-importing-torch/124721/2
or
https://github.com/pytorch/pytorch/issues/1633#issuecomment-323435572<|||||>Thank you, @atturaioe. I explored both of those forums and tried the strategies. However, the error seems to have been something else. Anywho, in exasperation, I dumped the entire environment I was using and installed all the packages from scratch. That seems to have done it. |
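For reference, a hedged sketch of the "recreate the environment from scratch" fix described above (environment name and versions are illustrative):
```bash
conda create -n hf-clean python=3.10 -y
conda activate hf-clean
pip install torch transformers
python -c "import transformers; print(transformers.__version__)"
```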
transformers | 20,441 | closed | Add ViViT | # What does this PR do?
Fixes #15666
Add Video Vision Transformer to transformers. This PR implements a spacetime version of the Video Vision Transformer from the original paper.
I have provided the model weights here https://huggingface.co/jegormeister/vivit-b-16x2-kinetics400
I will try to add Factorised Encoder version later on (these are the two versions that authors provide weight for).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/15666
- [x] Did you make sure to update the documentation with your changes? I have added the documentation, but I have troubles testing it as I couldn't run the preview command of the doc-builder, so if someone has the possibility to run and check it, I will be really grateful!
- [x] Did you write any new necessary tests? WIP
## Who can review?
@LysandreJik answered to the original issue so would be great if you could assist with the PR or suggesting who could. Thanks!
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-24-2022 17:45:49 | 11-24-2022 17:45:49 | Hello, @jegork thanks for the PR! Having some experience in adding tests and fixing repo-consistency/style, I can help you with these aspects if you need any :+1: Feel free to tag me when needed.<|||||>cc @alaradirik and @NielsRogge <|||||>Hi @jegork, thanks for working on this!
I saw that it's not possible to import and test ViViT on your PR branch yet, this is because a lot of files need to be edited to properly import the new modules (ViViTConfig, ViViTModel, etc.). You can refer to this [PR](https://github.com/huggingface/transformers/pull/20459) to see what files need to be edited.
You can either make these changes manually or run the `transformers-cli add-new-model` command, which automatically takes care of a lot of these changes and initializes the model-specific files (modelling_vivit.py, etc.). You can learn more about this over [here.](https://github.com/huggingface/transformers/blob/main/docs/source/en/add_new_model.mdx)
Once you are done, you can run the `make fixup` command to make sure your code passes the [style, quality and repo consistency CI tests](https://huggingface.co/docs/transformers/contributing).
cc @sgugger @NielsRogge <|||||>Hey @alaradirik, thanks for your reply and guidance!
I will address what you suggested and add tests by the end of the week.
<|||||>@alaradirik I've fixed the structuring via the suggested `transformers-cli add-new-model` and have run `make fixup`.
All the relevant imports seem to work now (via `from transformers import ViViTModel, ViViTConfig, ViViTImageProcessor, ViViTFeatureExtractor, ViViTLayer, ViViTPreTrainedModel, ViViTForVideoClassification`)
Thanks again! Will add tests next.<|||||>@jegork really cool work, as a next step could you try to make the CI as green as possible? Currently there are many failing checks (10 failing and 9 successful). You can click on "details" -> "artifacts" -> "failures long" to see why exactly a check has failed.
You will also need to rebase on the main branch due to some issues with TF versions which are fixed on the upstream branch:
```
git remote add upstream https://github.com/huggingface/transformers.git
git fetch upstream
git rebase upstream/main
```
<|||||>@NielsRogge thanks for your reply! I was indeed looking at the CI and not picking up the issue because most of them display `No module named 'keras.saving.hdf5_format'` as the error, but it seems that is exactly what you mentioned - the recent update to the TF version in the main branch. I will look into that today!<|||||>Also please use `Vivit` to prefix all the classes (instead of `ViViT`), so `VivitConfig`, `VivitModel` etc (liek `BertConfig`, `BertModel` etc.). Users have complained a lot about the casing we use for our models as it makes them harder to find in the lib :-)<|||||>@NielsRogge apparently I have run what you provided incorrectly, sorry for that! <|||||>Okay np, fixed π<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20441). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||> @NielsRogge would be great if you could take a look at the changes I made. The CI also seems to be all good.
Thanks!<|||||>hey @NielsRogge, thank you for your comments!
I have a question regarding the conversion script. I am using `restore_checkpoint(flax_model_path, None)` from `flax.training.checkpoints`; however, this fails with the flax version 0.3.6 that was installed on my machine automatically when installing the dev transformers version as per the documentation. Upgrading flax to 0.3.25 fixes the issue (I haven't tested lower versions). Is there any other way I could load a flax checkpoint?
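(For reference, one alternative I could imagine, as a hedged, untested sketch, assuming the checkpoint is the plain msgpack-serialized pytree that `flax.training.checkpoints` writes:)
```python
from flax.serialization import msgpack_restore

# the path below is a placeholder for the original flax checkpoint file
with open("path/to/flax_checkpoint", "rb") as f:
    params = msgpack_restore(f.read())
```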
<|||||>@NielsRogge I think I've addressed all of your points, though I have doubts about whether I implemented the transformations check in the conversion script correctly (I couldn't come up with a better approach, as the preprocessing code in the original implementation is scattered among multiple files, but I have tried to leave as many notes there as possible). It would be great if you could check that and, if applicable, suggest any improvements.
Thanks again!<|||||>seems like the test job fails with
```
FAILED tests/models/bridgetower/test_modeling_bridgetower.py::BridgeTowerModelTest::test_save_load_fast_init_to_base - AssertionError: 0.08835485577583313 not less than or equal to 0.001 : vision_model.visual.transformer.resblocks.11.attn.in_proj_weight not identical
```
after I rebased from the main branch. It seems to have nothing to do with this PR. Was it already fixed, and shall I rebase again?
Hi @jegork! You're right, the failing test is unrelated to the PR. Could you wait for a day and rebase again?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for the late reply here, I've assigned @amyeroberts to review the PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,440 | closed | Adding a repr to pipelines | ### Feature request
Would you be interested in adding a `__repr__` to the `Pipeline` class?
### Motivation
This could be used to display useful information after instantiation, particularly in interactive environments like Jupyter. A useful representation would be very useful to display what defaults were loaded by a library or model.
### Your contribution
I've been testing in my local install and I can submit a PR with the following:
```python
class Pipeline(_ScikitCompat):
...
def __repr__(self) -> str:
string_out = (
f"{type(self).__name__}(\n"
f" task={self.task},\n"
f" modelcard={self.modelcard},\n"
f" feature_extractor={self.feature_extractor},\n"
f" framework={self.framework},\n"
f" device={self.device},\n"
f" call_count={self.call_count},\n"
f" tokenizer={self.tokenizer},\n"
f" model.config={self.model.config},\n"
")"
)
return string_out
```
Which has output:
```python
TextClassificationPipeline(
task=text-classification,
modelcard=None,
feature_extractor=None,
framework=pt,
device=cpu,
call_count=3,
tokenizer=PreTrainedTokenizerFast(name_or_path='nlptown/bert-base-multilingual-uncased-sentiment', vocab_size=105879, model_max_len=512, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}),
model.config=BertConfig {
"_name_or_path": "nlptown/bert-base-multilingual-uncased-sentiment",
"_num_labels": 5,
"architectures": [
"BertForSequenceClassification"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"directionality": "bidi",
"finetuning_task": "sentiment-analysis",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "1 star",
"1": "2 stars",
"2": "3 stars",
"3": "4 stars",
"4": "5 stars"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"label2id": {
"1 star": 0,
"2 stars": 1,
"3 stars": 2,
"4 stars": 3,
"5 stars": 4
},
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"position_embedding_type": "absolute",
"transformers_version": "4.20.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 105879
}
,
)
``` | 11-24-2022 17:36:11 | 11-24-2022 17:36:11 | cc @Narsil <|||||>@payoto ,
This seems like a good idea !
I'm slightly worried about the size of the `repr` though. It's already really large with your example, and you are missing the extra `_{preprocess,forward,postprocess}_params`. (Which are important imo)
As a start, I would use exactly what you have done, but only without the config.
For the `model` I would actually put it, but maybe only the class names, because `repr(model)` is also quite verbose.<|||||>The reason I added the `model.config` entry was that it showed me information about the expected output of the pipeline; one of the things I wanted to find out was "what are all the possible classes returned by this pipeline?" Where would you recommend I get that information?
I'll open a PR with what you've suggested, and we can iterate from there.<|||||>> what are all the possible classes returned by this pipeline"?
You mean `config.id2label` ? I sympathize with the goal, but `repr` is meant to be seen by devs easily during iteration, so making them short is important IMO. But we can start with your version if you prefer and we'll adapt based on community feedback.
Your proposal is already much better than the current state !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
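To illustrate the more compact representation suggested in the discussion above, here is a hedged sketch (not the implementation that was eventually merged; it only uses attributes already shown in the thread):
```python
from transformers import pipeline


def short_repr(pipe) -> str:
    # class names only, no full config dump, as suggested above
    return (
        f"{type(pipe).__name__}("
        f"task={pipe.task}, "
        f"model={type(pipe.model).__name__}, "
        f"tokenizer={type(pipe.tokenizer).__name__}, "
        f"framework={pipe.framework}, "
        f"device={pipe.device})"
    )


pipe = pipeline("text-classification", model="nlptown/bert-base-multilingual-uncased-sentiment")
print(short_repr(pipe))
```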
transformers | 20,439 | closed | Include image processor in add-new-model-like | # What does this PR do?
* Adds logic to `add-new-model-like` CLI to include image processors
* Updates `tests/utils/add_new_model_like.py` so tests run (was outdated)
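For reference, the interactive CLI flow this PR extends is started with the command below (illustrative invocation):
```bash
# the interactive wizard's generated files now also cover the image processor
transformers-cli add-new-model-like
```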
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 11-24-2022 17:29:59 | 11-24-2022 17:29:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh The test requires all three frameworks (PyTorch, TensorFlow and Flax) so it is never run in the CI.<|||||>OK, I completely misunderstand `Add model like runner / Add new model like template tests (pull_request)`.
Ignore my previous comment π . |
transformers | 20,438 | closed | transformers + deepspeed hangs when training on multiple GPUs | ### System Info
My code runs inside an NVIDIA docker container `nvcr.io/nvidia/pytorch:22.05-py3`.
The installed dependencies are listed here: https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html#framework-matrix-2022
I'm using the following versions for transformers and deepspeed:
- transformers==4.24.0
- deepspeed==0.7.5
### Who can help?
@stas00
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I want to train a model on multiple GPUs. The server I'm using has 8x A100 GPUs with 40GB each. I'm using deepspeed zero3 to partition the model across GPUs. Unfortunately, the code "hangs" mid execution and runs forever.
I can run the same code successfully on a different server with V100 GPUs. So I am assuming the issue might be related to the communication between the GPUs? Not sure.
Below are the files I am using. I have also attached the output of the script below.
Thanks for your help!
Deepspeed config file:
```json
{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "optimizer": {
        "type": "AdamW",
        "params": {
            "lr": "auto",
            "betas": "auto",
            "eps": "auto",
            "weight_decay": "auto"
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": "auto",
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto",
            "warmup_type": "linear"
        }
    },
    "zero_optimization": {
        "stage": 3,
        "stage3_gather_16bit_weights_on_model_save": true,
        "reduce_scatter": true,
        "overlap_comm": true,
        "contiguous_gradients": true
    },
    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 100,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
```
Minimal python example:
```python
import os
from transformers import AutoConfig, AutoModelForSequenceClassification, TrainingArguments, HfArgumentParser, Trainer


def main():
    parser = HfArgumentParser(TrainingArguments)
    training_args = parser.parse_args_into_dataclasses()[0]

    config = AutoConfig.from_pretrained(
        "facebook/opt-1.3b",
        cache_dir=os.getenv("HF_MODELS_CACHE"),
    )

    model = AutoModelForSequenceClassification.from_pretrained(
        "facebook/opt-1.3b",
        from_tf=False,
        config=config,
        cache_dir=os.getenv("HF_MODELS_CACHE"),
    )

    trainer = Trainer(
        model=model,
        args=training_args,
    )


if __name__ == "__main__":
    main()
```
bash script to start the python script:
```bash
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=ALL
export CUDA_LAUNCH_BLOCKING=1
export HF_MODELS_CACHE=/cache-dir
OUTPUT_DIR=/output-dir

deepspeed \
    --num_gpus 2 \
    --master_port 60000 \
    ./debug.py \
    --output_dir $OUTPUT_DIR \
    --deepspeed ./deepspeed_configs/ds_config_zero3.json
```
What happens:
- The code will run forever. No error message is shown.
### Expected behavior
The script terminates successfully. | 11-24-2022 15:13:08 | 11-24-2022 15:13:08 | This is the output produced by the minimal example. It keeps running forever and does not produce any new output.
```
Detected CUDA_VISIBLE_DEVICES=GPU-460af155,GPU-457e4df4,GPU-08f1eba5,GPU-4793f3fd,GPU-cbc5b6ef,GPU-aa661638,GPU-a39d482a,GPU-dc0ceb93 but ignoring it because one or several of --include/--exclude/--num_gpus/--num_nodes cl args were used. If you want to use CUDA_VISIBLE_DEVICES don't pass any of these arguments to deepspeed.
[2022-11-24 14:38:15,640] [INFO] [runner.py:508:main] cmd = /home/mmosbach/miniconda3/envs/llmft/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=60000 /home/mmosbach/projects/llmft/debug.py --output_dir /home/mmosbach/logs/llmft/logfiles --deepspeed /home/mmosbach/projects/llmft/deepspeed_configs/ds_config_zero3.json
[2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_VERSION=2.12.10+cuda11.6
[2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_DEBUG_SUBSYS=ALL
[2022-11-24 14:38:18,207] [INFO] [launch.py:135:main] 0 NCCL_DEBUG=INFO
[2022-11-24 14:38:18,207] [INFO] [launch.py:142:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2022-11-24 14:38:18,207] [INFO] [launch.py:148:main] nnodes=1, num_local_procs=2, node_rank=0
[2022-11-24 14:38:18,207] [INFO] [launch.py:161:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2022-11-24 14:38:18,208] [INFO] [launch.py:162:main] dist_world_size=2
[2022-11-24 14:38:18,208] [INFO] [launch.py:164:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2022-11-24 14:38:24,319] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
mmosbach-20307:535:535 [0] NCCL INFO Bootstrap : Using eth0:172.17.0.2<0>
mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Failed to find ncclNetPlugin_v6 symbol.
mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Loaded net plugin NCCL RDMA Plugin (v4)
mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v6 symbol.
mmosbach-20307:535:535 [0] NCCL INFO NET/Plugin: Loaded coll plugin SHARP (v4)
mmosbach-20307:535:535 [0] NCCL INFO cudaDriverVersion 11070
NCCL version 2.14.3+cuda11.7
mmosbach-20307:535:535 [0] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7f18dc200000
mmosbach-20307:536:536 [1] NCCL INFO cudaDriverVersion 11070
mmosbach-20307:535:717 [0] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
mmosbach-20307:535:717 [0] NCCL INFO P2P plugin IBext
mmosbach-20307:535:717 [0] NCCL INFO NET/IB : No device found.
mmosbach-20307:535:717 [0] NCCL INFO NET/IB : No device found.
mmosbach-20307:535:717 [0] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.2<0>
mmosbach-20307:535:717 [0] NCCL INFO Using network Socket
mmosbach-20307:536:536 [1] NCCL INFO Bootstrap : Using eth0:172.17.0.2<0>
mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Failed to find ncclNetPlugin_v6 symbol.
mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Loaded net plugin NCCL RDMA Plugin (v4)
mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Failed to find ncclCollNetPlugin_v6 symbol.
mmosbach-20307:536:536 [1] NCCL INFO NET/Plugin: Loaded coll plugin SHARP (v4)
mmosbach-20307:536:536 [1] NCCL INFO init.cc:1147 Cuda Host Alloc Size 4 pointer 0x7feb60200000
mmosbach-20307:536:718 [1] NCCL INFO Plugin Path : /opt/hpcx/nccl_rdma_sharp_plugin/lib/libnccl-net.so
mmosbach-20307:536:718 [1] NCCL INFO P2P plugin IBext
mmosbach-20307:536:718 [1] NCCL INFO NET/IB : No device found.
mmosbach-20307:536:718 [1] NCCL INFO NET/IB : No device found.
mmosbach-20307:536:718 [1] NCCL INFO NET/Socket : Using [0]eth0:172.17.0.2<0>
mmosbach-20307:536:718 [1] NCCL INFO Using network Socket
mmosbach-20307:536:718 [1] NCCL INFO NET/Socket : GPU Direct RDMA Disabled for HCA 0 'eth0'
mmosbach-20307:535:717 [0] NCCL INFO NET/Socket : GPU Direct RDMA Disabled for HCA 0 'eth0'
mmosbach-20307:536:718 [1] NCCL INFO transport/p2p.cc:151 Cuda Alloc Size 2097152 pointer 0x7feb60c00000
mmosbach-20307:536:718 [1] NCCL INFO === System : maxBw 24.0 totalBw 24.0 ===
mmosbach-20307:535:717 [0] NCCL INFO transport/p2p.cc:151 Cuda Alloc Size 2097152 pointer 0x7f18dcc00000
mmosbach-20307:536:718 [1] NCCL INFO CPU/0 (1/2/-1)
mmosbach-20307:536:718 [1] NCCL INFO + PCI[5000.0] - NIC/0
mmosbach-20307:536:718 [1] NCCL INFO + PCI[24.0] - GPU/1000 (0)
mmosbach-20307:536:718 [1] NCCL INFO + PCI[24.0] - GPU/25000 (1)
mmosbach-20307:536:718 [1] NCCL INFO ==========================================
mmosbach-20307:536:718 [1] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) GPU/25000 (2/24.000000/PHB) CPU/0 (1/24.000000/PHB)
mmosbach-20307:536:718 [1] NCCL INFO GPU/25000 :GPU/1000 (2/24.000000/PHB) GPU/25000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB)
mmosbach-20307:536:718 [1] NCCL INFO Setting affinity for GPU 1 to ffffffff,ffffffff,00000000,00000000,ffffffff,ffffffff
mmosbach-20307:535:717 [0] NCCL INFO === System : maxBw 24.0 totalBw 24.0 ===
mmosbach-20307:535:717 [0] NCCL INFO CPU/0 (1/2/-1)
mmosbach-20307:535:717 [0] NCCL INFO + PCI[5000.0] - NIC/0
mmosbach-20307:535:717 [0] NCCL INFO + PCI[24.0] - GPU/1000 (0)
mmosbach-20307:535:717 [0] NCCL INFO + PCI[24.0] - GPU/25000 (1)
mmosbach-20307:535:717 [0] NCCL INFO ==========================================
mmosbach-20307:535:717 [0] NCCL INFO GPU/1000 :GPU/1000 (0/5000.000000/LOC) GPU/25000 (2/24.000000/PHB) CPU/0 (1/24.000000/PHB)
mmosbach-20307:535:717 [0] NCCL INFO GPU/25000 :GPU/1000 (2/24.000000/PHB) GPU/25000 (0/5000.000000/LOC) CPU/0 (1/24.000000/PHB)
mmosbach-20307:535:717 [0] NCCL INFO Setting affinity for GPU 0 to ffffffff,ffffffff,00000000,00000000,ffffffff,ffffffff
mmosbach-20307:536:718 [1] NCCL INFO Pattern 4, crossNic 0, nChannels 2, bw 12.000000/12.000000, type PHB/PIX, sameChannels 1
mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/0 GPU/1
mmosbach-20307:536:718 [1] NCCL INFO Pattern 1, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0
mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/1 GPU/0
mmosbach-20307:536:718 [1] NCCL INFO Pattern 3, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0
mmosbach-20307:536:718 [1] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:536:718 [1] NCCL INFO 1 : GPU/1 GPU/0
mmosbach-20307:535:717 [0] NCCL INFO Pattern 4, crossNic 0, nChannels 2, bw 12.000000/12.000000, type PHB/PIX, sameChannels 1
mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/0 GPU/1
mmosbach-20307:535:717 [0] NCCL INFO Pattern 1, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0
mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/1 GPU/0
mmosbach-20307:535:717 [0] NCCL INFO Pattern 3, crossNic 0, nChannels 2, bw 22.000000/22.000000, type PHB/PIX, sameChannels 0
mmosbach-20307:535:717 [0] NCCL INFO 0 : GPU/0 GPU/1
mmosbach-20307:535:717 [0] NCCL INFO 1 : GPU/1 GPU/0
mmosbach-20307:536:718 [1] NCCL INFO Tree 0 : 0 -> 1 -> -1/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Tree 2 : 0 -> 1 -> -1/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Tree 1 : -1 -> 1 -> 0/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Tree 3 : -1 -> 1 -> 0/-1/-1
mmosbach-20307:535:717 [0] NCCL INFO Tree 0 : -1 -> 0 -> 1/-1/-1
mmosbach-20307:535:717 [0] NCCL INFO Tree 2 : -1 -> 0 -> 1/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Ring 00 : 0 -> 1 -> 0
mmosbach-20307:535:717 [0] NCCL INFO Tree 1 : 1 -> 0 -> -1/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Ring 01 : 0 -> 1 -> 0
mmosbach-20307:535:717 [0] NCCL INFO Tree 3 : 1 -> 0 -> -1/-1/-1
mmosbach-20307:536:718 [1] NCCL INFO Ring 02 : 0 -> 1 -> 0
mmosbach-20307:536:718 [1] NCCL INFO Ring 03 : 0 -> 1 -> 0
mmosbach-20307:535:717 [0] NCCL INFO Channel 00/04 : 0 1
mmosbach-20307:536:718 [1] NCCL INFO Trees [0] -1/-1/-1->1->0 [1] 0/-1/-1->1->-1 [2] -1/-1/-1->1->0 [3] 0/-1/-1->1->-1
mmosbach-20307:535:717 [0] NCCL INFO Channel 01/04 : 0 1
mmosbach-20307:535:717 [0] NCCL INFO Channel 02/04 : 0 1
mmosbach-20307:536:718 [1] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
mmosbach-20307:535:717 [0] NCCL INFO Channel 03/04 : 0 1
mmosbach-20307:535:717 [0] NCCL INFO Ring 00 : 1 -> 0 -> 1
mmosbach-20307:535:717 [0] NCCL INFO Ring 01 : 1 -> 0 -> 1
mmosbach-20307:535:717 [0] NCCL INFO Ring 02 : 1 -> 0 -> 1
mmosbach-20307:535:717 [0] NCCL INFO Ring 03 : 1 -> 0 -> 1
mmosbach-20307:535:717 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] -1/-1/-1->0->1 [2] 1/-1/-1->0->-1 [3] -1/-1/-1->0->1
mmosbach-20307:535:717 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c00000
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c00600
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c00800
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c00e00
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc00000
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c01000
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c01600
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc00600
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7feb60c01800
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc00800
mmosbach-20307:536:718 [1] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7feb60c01e00
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc00e00
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc01000
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc01600
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:23 Cuda Alloc Size 1152 pointer 0x7f18dcc01800
mmosbach-20307:535:717 [0] NCCL INFO channel.cc:27 Cuda Alloc Size 8 pointer 0x7f18dcc01e00
mmosbach-20307:536:719 [1] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7feb48002c70
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e10
mmosbach-20307:535:720 [0] NCCL INFO Mem Realloc old size 0, new size 8 pointer 0x7f18d0000b60
mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 0 from local rank 1, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002ea0
mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 0 from local rank 0, transport 0
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb60e00000
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18dce00000
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e50
mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 1 from local rank 1, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002ee0
mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 1 from local rank 0, transport 0
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb58000000
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d4000000
mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 2 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002e90
mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 2 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002f20
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb58a00000
mmosbach-20307:536:719 [1] NCCL INFO New proxy recv connection 3 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002ed0
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d4a00000
mmosbach-20307:535:720 [0] NCCL INFO New proxy recv connection 3 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002f60
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7feb59400000
mmosbach-20307:536:718 [1] NCCL INFO Channel 00/0 : 1[25000] -> 0[1000] via P2P/IPC
mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 4 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f10
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:449 Cuda Alloc Size 10485760 pointer 0x7f18d5400000
mmosbach-20307:535:717 [0] NCCL INFO Channel 00/0 : 0[1000] -> 1[25000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 4 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002fa0
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb59e00000
mmosbach-20307:536:718 [1] NCCL INFO Channel 01/0 : 1[25000] -> 0[1000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18d5e00000
mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 5 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f50
mmosbach-20307:535:717 [0] NCCL INFO Channel 01/0 : 0[1000] -> 1[25000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 5 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0002fe0
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61800000
mmosbach-20307:536:718 [1] NCCL INFO Channel 02/0 : 1[25000] -> 0[1000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18dd800000
mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 6 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002f90
mmosbach-20307:535:717 [0] NCCL INFO Channel 02/0 : 0[1000] -> 1[25000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 6 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0003020
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61a00000
mmosbach-20307:536:718 [1] NCCL INFO Channel 03/0 : 1[25000] -> 0[1000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18dda00000
mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 7 from local rank 1, transport 0
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48002fd0
mmosbach-20307:535:717 [0] NCCL INFO Channel 03/0 : 0[1000] -> 1[25000] via P2P/IPC
mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 7 from local rank 0, transport 0
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d0003060
mmosbach-20307:536:719 [1] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7feb61c00000
mmosbach-20307:535:720 [0] NCCL INFO transport/p2p.cc:430 Cuda Alloc Size 2097152 pointer 0x7f18ddc00000
mmosbach-20307:536:718 [1] NCCL INFO Connected all rings
mmosbach-20307:536:718 [1] NCCL INFO Connected all trees
mmosbach-20307:536:718 [1] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
mmosbach-20307:536:718 [1] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
mmosbach-20307:535:717 [0] NCCL INFO Connected all rings
mmosbach-20307:535:717 [0] NCCL INFO Connected all trees
mmosbach-20307:536:719 [1] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-JKUXpI
mmosbach-20307:535:717 [0] NCCL INFO Latency/AlgBw | Tree/ LL | Tree/ LL128 | Tree/Simple | Ring/ LL | Ring/ LL128 | Ring/Simple | CollNetDirect/ LL | CollNetDirect/ LL128 | CollNetDirect/Simple | CollNetChain/ LL | CollNetChain/ LL128 | CollNetChain/Simple |
mmosbach-20307:535:717 [0] NCCL INFO Max NThreads | 512 | 640 | 512 | 512 | 640 | 512 | 0 | 0 | 512 | 0 | 0 | 512 |
mmosbach-20307:535:717 [0] NCCL INFO Broadcast | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 8.0 | 12.5/ 0.0 | 14.1/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
mmosbach-20307:535:717 [0] NCCL INFO Reduce | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 6.0 | 12.5/ 0.0 | 14.1/ 24.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
mmosbach-20307:535:717 [0] NCCL INFO AllGather | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 16.0 | 12.5/ 0.0 | 14.1/ 48.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
mmosbach-20307:535:717 [0] NCCL INFO ReduceScatter | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 4.6/ 16.0 | 12.5/ 0.0 | 14.1/ 48.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 | 0.0/ 0.0 |
mmosbach-20307:535:717 [0] NCCL INFO AllReduce | 6.4/ 5.3 | 8.2/ 0.0 | 56.0/ 20.2 | 5.6/ 6.0 | 15.0/ 0.0 | 19.8/ 24.0 | 5.4/ 0.0 | 5.4/ 0.0 | 27.7/ 0.0 | 4.4/ 0.0 | 4.4/ 0.0 | 16.0/ 0.0 |
mmosbach-20307:535:717 [0] NCCL INFO threadThresholds 8/8/64 | 16/8/64 | 512 | 512
mmosbach-20307:535:717 [0] NCCL INFO 4 coll channels, 4 p2p channels, 2 p2p channels per peer
mmosbach-20307:535:720 [0] NCCL INFO Allocated 4194656 bytes of shared memory in /dev/shm/nccl-ihjCjE
mmosbach-20307:536:719 [1] NCCL INFO New proxy send connection 8 from local rank 1, transport 2
mmosbach-20307:536:718 [1] NCCL INFO Connection to proxy localRank 1 -> connection 0x7feb48003010
mmosbach-20307:535:720 [0] NCCL INFO New proxy send connection 8 from local rank 0, transport 2
mmosbach-20307:535:717 [0] NCCL INFO Connection to proxy localRank 0 -> connection 0x7f18d00030a0
mmosbach-20307:536:719 [1] NCCL INFO transport/net.cc:376 Cuda Alloc Size 8388608 pointer 0x7feb47200000
mmosbach-20307:536:718 [1] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7feb60c02000
mmosbach-20307:535:717 [0] NCCL INFO init.cc:367 Cuda Alloc Size 5168 pointer 0x7f18dcc02000
mmosbach-20307:535:720 [0] NCCL INFO transport/net.cc:376 Cuda Alloc Size 8388608 pointer 0x7f18c3200000
mmosbach-20307:535:717 [0] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7f18b6000000
mmosbach-20307:535:717 [0] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7f18dc200200
mmosbach-20307:535:717 [0] NCCL INFO comm 0x447a30b0 rank 0 nranks 2 cudaDev 0 busId 1000 - Init COMPLETE
mmosbach-20307:535:535 [0] NCCL INFO Broadcast: opCount 0 sendbuff 0x7f190a000000 recvbuff 0x7f190a000000 count 411828224 datatype 0 op 0 root 0 comm 0x447a30b0 [nranks=2] stream 0x447a2580
mmosbach-20307:536:718 [1] NCCL INFO init.cc:392 Cuda Host Alloc Size 33554432 pointer 0x7feb3a000000
mmosbach-20307:535:535 [0] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
mmosbach-20307:536:718 [1] NCCL INFO init.cc:398 Cuda Host Alloc Size 128 pointer 0x7feb60200200
mmosbach-20307:536:718 [1] NCCL INFO comm 0x43bb7070 rank 1 nranks 2 cudaDev 1 busId 25000 - Init COMPLETE
mmosbach-20307:536:536 [1] NCCL INFO Broadcast: opCount 0 sendbuff 0x7feb8a000000 recvbuff 0x7feb8a000000 count 411828224 datatype 0 op 0 root 0 comm 0x43bb7070 [nranks=2] stream 0x43bb63e0
mmosbach-20307:536:536 [1] NCCL INFO misc/utils.cc:235 memory stack hunk malloc(65536)
```
<|||||>Thank you for an excellent report, @mmarius
This is almost certainly an issue that you'd need to report to DeepSpeed, since the hanging isn't related to the HF integration. The only hanging that could happen in the integration is in `generate` if one doesn't turn the GPU sync flag on, but I don't see you using it. The rest is core DeepSpeed.
But here are some suggestions based on my experience that might help:
1. This could be a hardware issue. Can you try the same code on a different server of the same setup?
2. Sometimes these help (try one at a time and see if the hanging goes away):
```
# do not remove or the training will hang and nodes will be lost w/o this workaround
export CUDA_LAUNCH_BLOCKING=1
# force crashing on nccl issues like hanging broadcast
export NCCL_ASYNC_ERROR_HANDLING=1
```
I see you have already tried the first one; I suppose it didn't help. It solved one huge hang during BLOOM training.
3. if none of the above helps, time to get your hands dirty and run `py-spy` and see where it hangs.
You can of course run it on the process directly, as you only have 2.
But also you may want to read some multi-gpu `py-spy` recipes in:
- https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles-prequel.md
- https://github.com/bigscience-workshop/bigscience/blob/master/train/tr11-176B-ml/chronicles.md
and in general you might find some helpful notes in there. We had several hanging issues before we managed to get BLOOM-176B training on 384 A100s. That run used Megatron-DeepSpeed, which relies not on ZeRO-3 but on a sort of ZeRO-1 customized for bf16; still, the code is relatively similar and there is a lot of overlap with ZeRO-3.
When you report to DeepSpeed they will definitely ask you for the output of `py-spy`.
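As an illustrative sketch only (the process filter below is an assumption — adjust it to however your training processes are named), dumping the stack of every rank on a node looks roughly like:
```bash
pip install py-spy
# dump the Python stack of every rank on this node; may need sudo / CAP_SYS_PTRACE
for pid in $(pgrep -f "python.*train"); do
    echo "=== PID $pid ==="
    py-spy dump --pid "$pid"
done
```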
p.s. `pip install py-spy; py-spy dump --pid PID`<|||||>Thanks for your detailed reply, @stas00
I tried using
# force crashing on nccl issues like hanging broadcast
export NCCL_ASYNC_ERROR_HANDLING=1
but it didn't help.
Before getting into `py-spy`, I ran this script (https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-to-gpu-communication) to see whether the GPU-to-GPU communication is working correctly on the server I am using, and it seems that there are indeed some problems there. The latency is way too large.
P2P=Enabled Latency (P2P Writes) Matrix (us)
GPU 0 1 2 3 4 5 6 7
0 4.95 49206.88 49206.64 49206.69 49206.75 49206.68 49206.72 49206.72
1 49206.62 2.08 49206.51 49206.52 49206.42 49206.42 49206.39 49206.43
2 49206.70 49206.45 2.21 49206.45 49206.47 49206.56 49206.43 49206.49
3 49206.73 49206.53 49206.55 2.21 49206.59 49206.55 49206.55 49206.52
4 49206.77 49206.59 49206.57 49206.61 2.11 49206.60 49206.66 49206.60
5 49206.66 49206.47 49206.51 49206.49 49206.51 2.11 49206.46 49206.45
6 49206.82 49206.57 49206.61 49206.58 49206.62 49206.59 2.08 49206.60
7 49206.67 49206.51 49206.49 49206.46 49206.46 49206.47 49206.50 2.11
I will get back with more information once we have resolved the problem.
Feel free to close the issue as it's definitely not a transformers problem. <|||||>oh, so the hardware issue! Thank you for the update
Also you can try this diagnostics script:
https://github.com/stas00/toolbox/blob/master/pytorch/torch-distributed-gpu-test.py<|||||>I ran your diagnostic script and as with my minimal example above it simply runs forever ... <|||||>yeah, so it's almost certain a hardware issue then.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Did you solve it? I am facing the same problem @mmarius <|||||>@hahchenchen, hanging is a symptom; the problem leading to it can be completely different. Please see https://github.com/huggingface/transformers/issues/22142, which will tell you the cause.<|||||>@hahchenchen we fixed it by setting the `iommu` kernel parameter as follows: `iommu=soft`. In case your server has AMD CPUs, this parameter has a different default value.
You can set this parameter in this file: `/etc/default/grub`. In our case it looks like this
`GRUB_CMDLINE_LINUX_DEFAULT="iommu=soft"` |
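For readers on Debian/Ubuntu-style systems, a minimal sketch of applying the change described above (the standard GRUB workflow, added here for illustration):
```bash
# after editing /etc/default/grub so it contains: GRUB_CMDLINE_LINUX_DEFAULT="iommu=soft"
sudo update-grub   # regenerate the GRUB configuration
sudo reboot
cat /proc/cmdline  # after the reboot, confirm that iommu=soft is active
```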
transformers | 20,437 | closed | Rework the pipeline tutorial | # What does this PR do?
- Switch to `asr` instead of another NLP task.
- It also has results that are simpler to understand.
- Added a section with interaction with `datasets`.
- Added a section with writing a simple webserver.
Should help users: https://github.com/huggingface/transformers/issues/20414
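As a rough sketch of the direction described above — the model and dataset names below are illustrative choices, not necessarily the ones used in the final tutorial:
```python
from datasets import load_dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# stream the dataset through the pipeline instead of looping over files manually
for output in asr(KeyDataset(dataset, "audio"), batch_size=2):
    print(output["text"])
```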
@stevhliu
@sgugger
@mishig25
If I could have some initial feedback on the general direction that'd be great.
After the direction is validated, I will go and fix all the tests and links.
I will also take a closer look at the actual result formatting in the final documentation and add/remove things like `<tip>` to hopefully make it more readable.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
--> | 11-24-2022 14:45:49 | 11-24-2022 14:45:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Should we also make a brief mention of [chunk batching](https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-chunk-batching)?
I feel like it's nice that it is transparent for most users, but adding a link might be nice. How would you present it?<|||||>I accepted most of the comments, which are pure improvements, save for two where I think the result would be less than it is currently.
Also I feel the language tone is less human-like and more polished overall.
Making things polished and neutral is probably a lot better, especially for non-natives.
I'm just mentioning that because for tutorials, I like when there's a story unraveling, and not too monotone.
Here I think the result with your suggestions is balanced between the two.
Thank you !!<|||||>Thanks @stevhliu for the remarks !
|
transformers | 20,436 | closed | Fix ESM checkpoints for tests | Some of the ESM tests were still using my checkpoints instead of `facebook/` ones, so this PR fixes that! Also, TF inference tests are now enabled. | 11-24-2022 13:39:13 | 11-24-2022 13:39:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,435 | closed | Add DPT-hybrid | ### Model description
DPT-hybrid is used in Stable Diffusion 2.0's depth model and is also a nice depth estimation model in general.
We already support [DPT](https://huggingface.co/docs/transformers/model_doc/dpt) - which is also the default model of our [depth estimation pipeline](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.DepthEstimationPipeline), but not DPT-hybrid. The latter uses ViT-hybrid as backbone.
Hence, we first would need to add ViT-hybrid to the library. This model is very similar to a regular ViT, except that instead of patchifying an image and embedding each patch, this model uses a pre-trained ResNetv2 to embed the image before feeding the features to a Transformer encoder.
This means that we first need to add ResNetv2 to the library.
However, that seems fairly doable: I'd recommend porting the one from timm: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/resnetv2.py.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
DPT-hybrid is available here: https://github.com/isl-org/MiDaS | 11-24-2022 13:02:47 | 11-24-2022 13:02:47 | Hi @NielsRogge Can I take this up if no one is working on it?<|||||>This would be great @nandwalritik ! The Stable Diffusion Depth estimation model depends on it, so we'd definitely help you in whatever way we can :-) <|||||>@NielsRogge should we port the ResNetv2 exactly like ResNetv1: https://github.com/huggingface/transformers/blob/main/src/transformers/models/resnet/modeling_resnet.py or could we directly port it from `timm`?<|||||>Could you provide some more links on how to add ViT-Hybrid?<|||||>@patrickvonplaten yes we can add ResNetv2 in the same way as we added Resnet V1. Meaning, a separate standalone model in the library.
Once we have that, we have everything we need to define modeling_dpt_hybrid.py. This would be a copy of modeling_dpt.py, except that we leverage ResNetv2 (we can do that using the new AutoBackbone API which was just added) in the DPTViTEmbeddings class.
For stable diffusion there's no need to add hybrid ViT as a standalone model now (i.e. we can add modeling_hybrid_vit.py another time).<|||||>@NielsRogge @patrickvonplaten Can you let me know the next steps, what should I add first?
ResNetv2 -> DPT-hybrid, leaving ViT-hybrid as Niels mentioned above, will this be the order?<|||||>@nandwalritik yes, let's start by adding ResNetv2, based on timm's implementation found here: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/resnetv2.py.
Next, we can add DPT-hybrid, based on the modeling files here: https://github.com/isl-org/DPT/tree/main/dpt
This can be done in two separate PRs.<|||||>OK, I will start with ResNetv2 and then I will try adding DPT-hybrid.<|||||>Should I use the `add-new-model` or the `add-new-model-like` command, as we already have a ResNet implementation in huggingface?<|||||>You can use `add-new-model-like` and start from `resnet` (we might deprecate `add-new-model`) |
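For reference, a minimal sketch of the `add-new-model-like` workflow mentioned above; the command is interactive and prompts for the model to copy from:
```bash
# from a local clone of the transformers repository
pip install -e ".[dev]"
transformers-cli add-new-model-like   # pick `resnet` as the starting model when prompted
```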
transformers | 20,434 | closed | make tensors in function build_relative_position created on proper device | β¦vice instead of always on cpu
# What does this PR do?
Fixes #20413
This PR makes tensors in function build_relative_position created on proper device instead of always on cpu.
More details in #20413
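An illustrative sketch of the pattern applied here (simplified, not the exact DeBERTa code): the relative-position ids are built directly on the requested device instead of always on the CPU.
```python
import torch

def build_relative_position(query_size: int, key_size: int, device=None) -> torch.Tensor:
    q_ids = torch.arange(query_size, dtype=torch.long, device=device)
    k_ids = torch.arange(key_size, dtype=torch.long, device=device)
    rel_pos_ids = q_ids[:, None] - k_ids[None, :]   # stays on `device`, no CPU round-trip
    return rel_pos_ids.unsqueeze(0)

print(build_relative_position(3, 4, device="cpu").shape)  # torch.Size([1, 3, 4])
```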
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 11-24-2022 07:05:51 | 11-24-2022 07:05:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,433 | closed | Replace assertions with ValueErrors on distilbert model | Co-author: @batese2001
This is an extended PR from [here](https://github.com/huggingface/transformers/pull/20432) with new validity checks to improve _code quality._
This raises exceptions in multi-head attention for the following specified conditions on the _Distilbert_ model.
This successfully replaces assertions with ValueErrors with the _Distilbert_ model related to #12789.
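An illustrative before/after sketch of this kind of change (the names are hypothetical, not the exact DistilBERT code):
```python
def check_divisible(dim: int, n_heads: int) -> None:
    # before: `assert dim % n_heads == 0` -- silently skipped when Python runs with -O
    # after: an explicit exception that is always raised on invalid configs
    if dim % n_heads != 0:
        raise ValueError(f"dim={dim} must be divisible by n_heads={n_heads}")

check_divisible(768, 12)  # passes silently
```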
Would you be open to me changing assertions if I encounter other ones?
To: @younesbelkada | 11-24-2022 06:52:01 | 11-24-2022 06:52:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This is a new PR of my own based on your suggestion from [here](https://github.com/huggingface/transformers/pull/20375), with the files improved to pass all the validity checks.
Can I ask for your further suggestions for this PR to be merged?
Any comments will be greatly appreciated!
The mention of PR #12789 below refers to a related PR regarding the replacement of assertions with exceptions raised when the pre-defined conditions are violated.
<|||||>Hi @JuheonChu!
Thanks for the PR ;) will review it asap and let you know!<|||||>Thank you @younesbelkada for providing us with valuable suggestions! We will make changes and make another PR!
Sincerely appreciate it :)<|||||>No worries @JuheonChu !
As I made you some suggestions, actually you can directly continue on this PR, let me guide you through this step by step:
Step 1: Go to "file changed" (top right of the github UI of this PR):
<img width="921" alt="Screenshot 2022-11-25 at 22 17 47" src="https://user-images.githubusercontent.com/49240599/204059724-181f2dbc-9e09-4031-bb8c-3499b3ab12ce.png">
Step 2: For each suggestion, click on "Add suggestion to batch", to add each suggestion
<img width="921" alt="Screenshot 2022-11-25 at 22 18 59" src="https://user-images.githubusercontent.com/49240599/204059801-2e1f1a1a-cf9b-43a9-8e9b-bf8d43ea4b84.png">
Step 3: Once you have added all the suggestions (make sure to add all of them), a pop up bar will appear on the top-right corner, and you can just click to it, and the suggestions will be pushed ;)
<img width="1467" alt="Screenshot 2022-11-25 at 22 19 43" src="https://user-images.githubusercontent.com/49240599/204059837-7f8bf1fc-4c63-4234-8dae-48b52b38eb20.png">
This way there's no need to open a new PR each time we make a suggestion ;) Let me know if anything is unclear! <|||||>Due to the reformatting of the Jupyter files, I created a new PR [here](https://github.com/huggingface/transformers/pull/20463) which passes all the validation checks. I apologize for any inconvenience, but would you mind checking [here](https://github.com/huggingface/transformers/pull/20463)?
To: @younesbelkada <|||||>Sure! I propose to close this PR as we moved everything on the other PR ;) |
transformers | 20,432 | closed | Raise Value Error on Distilbert Model | Co-author: @batese2001
This PR is a new PR extended from https://github.com/huggingface/transformers/pull/20375. | 11-24-2022 05:59:25 | 11-24-2022 05:59:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Validity Checks will be improved. Closing PR. |
transformers | 20,431 | closed | Last hidden states different for same input even when evaluating ViT model | ### System Info
Google Colab
### Who can help?
@NielsRogge @sg
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I've created a public Colab notebook: https://colab.research.google.com/drive/1CUpyNInQg2kw7gL-mYwX8ov8fc9HuAV-#scrollTo=DAXcuQXVtTkz
where a VitMAE model is created, and its weights saved using Trainer.
Then it is loaded again, and placed in eval mode. However the last hidden state changes when it is passed the same input - why might that be happening?
### Expected behavior
A model in eval mode should have a deterministic output for the same input | 11-24-2022 03:43:25 | 11-24-2022 03:43:25 | Hi,
Yes ViTMAE generates a random boolean mask to mask patches internally. This results in non-deterministic hidden states when you perform a forward pass twice on the same image.
To get deterministic behaviour, you can pass a noise tensor yourself as shown [here](https://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/tests/models/vit_mae/test_modeling_vit_mae.py#L321). <|||||>Thanks!<|||||>Btw another way (in case you don't want to mask patches) is to set `config.mask_ratio=0.0` |
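For reference, a minimal sketch of the noise-passing approach mentioned above; it assumes `ViTMAEModel.forward` accepts a `noise` tensor, as the linked test does:
```python
import torch
from transformers import ViTMAEConfig, ViTMAEModel

config = ViTMAEConfig()
model = ViTMAEModel(config).eval()

pixel_values = torch.rand(1, 3, config.image_size, config.image_size)
num_patches = (config.image_size // config.patch_size) ** 2
noise = torch.rand(1, num_patches)  # fixing the noise fixes the random mask

with torch.no_grad():
    first = model(pixel_values, noise=noise).last_hidden_state
    second = model(pixel_values, noise=noise).last_hidden_state
print(torch.allclose(first, second))  # True
```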
transformers | 20,430 | closed | Questions on ViT and ViT-MAE model image preprocessing | ### System Info
Latest version
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Hi @NielsRogge, I'm reading the ViT image processing from HF and its original implementation and have a few questions on image processing. Really appreciate your helping me understand the differences.
**ViT**
The JAX implementation from Google rescales the pixel range to [-1, 1]: https://github.com/google-research/vision_transformer/blob/main/vit_jax/input_pipeline.py#L214
HF rescales it by a factor of 1/255: https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L80
**ViT MAE**
PyTorch implementation from Meta resizes the image with PIL.Image.BICUBIC interpolation: https://github.com/facebookresearch/mae/blob/efb2a8062c206524e35e47d04501ed4f544c0ae8/util/datasets.py#L59
HF uses BILINEAR https://github.com/huggingface/transformers/blob/main/src/transformers/models/vit/image_processing_vit.py#L78
### Expected behavior
I'd like to have clarifications of the questions above. | 11-24-2022 02:28:18 | 11-24-2022 02:28:18 | Hi,
Thanks for your interest in ViT and ViT MAE!
Regarding point 1; ViT was ported from the timm library, which also uses a scale factor of 1/255. This can be verified as follows:
```
import timm
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
model = timm.create_model('vit_base_patch16_224', pretrained=True)
model.eval()
# Create Transform
transform = create_transform(**resolve_data_config(model.pretrained_cfg, model=model))
print(transform)
```
which prints:
```
Compose(
Resize(size=248, interpolation=bicubic, max_size=None, antialias=None)
CenterCrop(size=(224, 224))
ToTensor()
Normalize(mean=tensor([0.5000, 0.5000, 0.5000]), std=tensor([0.5000, 0.5000, 0.5000]))
)
```
=> here, [ToTensor](https://pytorch.org/vision/stable/generated/torchvision.transforms.ToTensor.html) is used, which as stated in the docs converts a tensor with values 0 - 255 to the range [0-1]. Maybe @rwightman can clarify this.
Regarding point 2:
That's a valid point, not sure why I missed that. We could update this, although I'm not sure it has a big impact on downstream results. cc @sgugger <|||||>@NielsRogge @zhoutong-fu
ToTensor: uint8 [0, 255] -> float [0, 1.0]
Normalize(mean=tensor([0.5000, 0.5000, 0.5000]), std=tensor([0.5000, 0.5000, 0.5000])): float [0, 1.0] -> float [-1, 1]
Interpolation should be bicubic for both ViT and ViT MAE
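A quick numeric check (added for illustration) that the two descriptions agree — rescaling by 1/255 followed by normalization with mean 0.5 and std 0.5 lands in the [-1, 1] range:
```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.float32)
scaled = pixels / 255.0            # the rescale_factor=1/255 step
normalized = (scaled - 0.5) / 0.5  # image_mean=0.5, image_std=0.5
print(normalized)                  # approximately [-1, 0.0039, 1]
```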
<|||||>Thank you guys for the discussion.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,429 | closed | Pass output options to FLAVA multimodal transformer block | # What does this PR do?
Currently, the FLAVA model passes `output_attentions` and `output_hidden_states` only to its text & image blocks, preventing external access to cross-modal attentions in the multimodal block. This simple PR adds this ability by passing the `output_attentions` and `output_hidden_states` options to `FlavaMultimodalModel` during the `forward` pass.
I personally need to access these cross-modal attentions for implementing an auxiliary loss ([IAIS](https://github.com/lancopku/IAIS/)) which uses them, and this PR doesn't significantly change the behavior of this model.
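A rough usage sketch of what this change enables (the checkpoint and the `multimodal_output.attentions` access follow the public FLAVA API; the attentions are only populated once this PR is in):
```python
import requests
from PIL import Image
from transformers import FlavaModel, FlavaProcessor

processor = FlavaProcessor.from_pretrained("facebook/flava-full")
model = FlavaModel.from_pretrained("facebook/flava-full")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text=["two cats on a couch"], return_tensors="pt", padding=True)

outputs = model(**inputs, output_attentions=True)
# cross-modal attentions of the multimodal block, one tensor per layer
print(len(outputs.multimodal_output.attentions))
```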
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@TristanThrush @apsdehal
| 11-24-2022 00:52:22 | 11-24-2022 00:52:22 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20429). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,428 | closed | Migrate old cache to transformers v4.22.0 | ### System Info
- `transformers` version: 4.22.0
- Platform: Linux-5.4.0-1089-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When updating transformers from v4.12.3 to v4.22.0, I got the following message:
```
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
Moving 55 files to the new cache system
0%
0/55 [00:00<?, ?it/s]
There was a problem when trying to move your cache:
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 1127, in <module>
move_cache()
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 1071, in move_cache
hub_metadata[url] = get_hub_metadata(url, token=token)
File "/home/ubuntu/anaconda3/lib/python3.7/site-packages/transformers/utils/hub.py", line 996, in get_hub_metadata
huggingface_hub.file_download._raise_for_status(r)
Please file an issue at https://github.com/huggingface/transformers/issues/new/choose and copy paste this whole message and we will do our best to help.
```
### Expected behavior
The migration process should complete successfully. | 11-23-2022 21:35:02 | 11-23-2022 21:35:02 | I am also facing this issue with 4.24<|||||>Looks like there was a connection problem when the util tried to migrate the cache. If you don't care about the cached models, you can just ignore this; everything will work fine.
To try again to move an old cache to the new format, you can execute
```
from transformers.utils.hub import move_cache
move_cache()
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,427 | closed | Add deprecation warning when image FE instantiated | # What does this PR do?
Adds a deprecation warning if someone tries to create a vision feature extractor.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-23-2022 21:17:19 | 11-23-2022 21:17:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,426 | closed | Pipeline testing - using tiny models on Hub | # What does this PR do?
Pipeline testing - using tiny models on Hub.
A few comments:
- This PR moves the tiny model creation from `PipelineTestCaseMeta` (where it was done dynamically during testing) to `utils/create_dummy_models.py` (where the tiny models are created once and live on the Hub):
- The logic is still large, but at least it is done once rather than being created dynamically
- When a new model is added in `transformers`, it would **NOT** be used in pipeline testing **UNTIL** we create & upload tiny models for the new model type.
- even if we upload the new tiny models (or re-create the existing one), we also have to **UPDATE** [this repo](https://huggingface.co/datasets/hf-internal-testing/tiny-random-model-summary), see comments below
- While `pytest` collects the tests to run, the collection is done **in each process** (if we specify `-n N` with `N > 1`):
- If we use `from_pretrained` during test collection, there will be too many calls, and the server refuses the requests at some point: the tests being collected will **vary each time and be incomplete**
- So I upload a file [processor_classes.json](https://huggingface.co/datasets/hf-internal-testing/tiny-random-model-summary/blob/main/processor_classes.json) containing necessary information to call `gen_test`. The `from_pretrained` will only be called when the test is actually running.
- Some tests are just not working (yet), and an important subset of those failed tests is not tested in the current `main` branch
- for example, on `main`, all pipeline tests use fast tokenizers
- we probably need to check (and possibly fix) some of them, but depends on `impact` and `usage`, we will leave some of them skipped for now | 11-23-2022 20:46:25 | 11-23-2022 20:46:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Gently pining @LysandreJik for review at their convenience. We discussed last time offline the pipeline testing will eventually avoid using metaclass - I will work on it in a future PR. I think it's better to have progressive changes toward our ultimate goal ππ<|||||>Thank you for the ping, I'll have a look!<|||||>Hi @Narsil
In this PR, commit [3d46ed81](https://github.com/huggingface/transformers/pull/20426/commits/3d46ed81a7b688b13a85114b9f4168242e24e902), I revert some changes in your (merged) PR #20851.
In short: this `def get_test_pipeline(self, model, tokenizer, feature_extractor, image_processor):` is changed to `get_test_pipeline(self, model, tokenizer, processor):`
Before you PR, it was `get_test_pipeline(self, model, tokenizer, feature_extractor):`
More context:
- This PR leverages the uploaded checkpoints on the Hub for pipeline testing.
- In a follow-up PR, we plan to remove the usage of `PipelineTestCaseMeta`
- (therefore, this particular change will be just short-lived)
Let me know if you have any question or comment π
<|||||>This change was necessary to get some tests running.
Namely testing that oneformer and the like are actually working. These models **do not** have a feature extractor, only a `ImageProcessor`. So how can you make it work?
Since you're using tiny models maybe that function could be bypassed entirely?
Also for the network issue (too many from pretrained iiuc) isn't there a way to download all tiny models once and keep them on the runner so we could run the tests in offline mode? So no network calls? Maybe running network mode if there's a failure? (so downloading the new model) <|||||>@Narsil
> These models do not have a feature extractor, only a ImageProcessor. So how can you make it work?
- The creation and upload of tiny models (which is done in another script) should create the tokenizers and/or processors (feature extractors or image processor). During pipeline testing, we just load them. I don't see any problem here, but let me know if I miss any detail.
- (however, the tiny model creation should be run in a regular basis (or triggered by some conditions) in order to make the tiny checkpoints for newly added models available on the hub)
- this is not done yet, but I will work on it too
- `oneformer` doesn't have a tiny model checkpoint yet, so not tested by this PR
- but for other models, even they only have image processors, the tests could pass already
> Also for the network issue (too many from pretrained iiuc) isn't there a way to download all tiny models once and keep them on the runner
On our hosted runners, it's fine (i.e. cached). But what I mentioned is for pull request CI - which runs on `CircleCI`. So far I haven't looked into how to do similar things on it. |
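For reference, the offline mode discussed in this thread can be forced with environment variables once everything is cached locally; a rough sketch (the exact pytest invocation is illustrative):
```bash
export TRANSFORMERS_OFFLINE=1   # transformers only reads from the local cache
export HF_DATASETS_OFFLINE=1    # same for the datasets library
python -m pytest -n 2 tests/pipelines -k "pipeline"
```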
transformers | 20,425 | closed | Add Donut image processor | # What does this PR do?
Adds an image processor for Donut.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 11-23-2022 20:42:38 | 11-23-2022 20:42:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm currently unable to create a PR myself but please note that `DonutImageProcessor.preprocess()` still calls `.pad()` instead of `.pad_image()`, generating a *lot* of logger noise...<|||||>@pasky Thanks for raising! This should now be resolved with the merging of #20904 |
transformers | 20,424 | closed | Make `add_special_tokens` more clear | # What does this PR do?
Fix tokenization. The short term goal is to fix #20418 (well, make things more clear)
The long term goal is to be able to have a generic fix for #20401, but the tokenizer thing is somehow complicated, and I can only fix step by step.
| 11-23-2022 20:31:31 | 11-23-2022 20:31:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,423 | closed | ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main | ### System Info
I installed the newest version from git
OS ubuntu system
python3
```
ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
```
### Who can help?
@SaulLu
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Traceback (most recent call last):
File "AI_clone.py", line 11, in <module>
from transformers import AutoModelForCausalLM, AutoTokenizer
File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/__init__.py", line 30, in <module>
from . import dependency_versions_check
File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/dependency_versions_check.py", line 41, in <module>
require_version_core(deps[pkg])
File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 123, in require_version_core
return require_version(requirement, hint)
File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 117, in require_version
_compare_versions(op, got_ver, want_ver, requirement, pkg, hint)
File "/home/fish/anaconda3/envs/comp3340/lib/python3.7/site-packages/transformers/utils/versions.py", line 51, in _compare_versions
f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}"
ImportError: tokenizers>=0.11.1,!=0.11.3,<0.14 is required for a normal functioning of this module, but found tokenizers==0.10.3.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
### Expected behavior
How to use the Transformer lib without this error | 11-23-2022 19:17:45 | 11-23-2022 19:17:45 | Hi @Justinfungi,
Could you share the command you run that generates this error? Possibly the associated python script?
The error suggests that you have a version problem in your python environment. How did you install transformers? Did you try to run the recommended command "Try: `pip install transformers -U` or `pip install -e '.[dev]'` if you're working with git main" ? <|||||>```
import customtkinter as ctk
import os
import torch
#import torchaudio
from transformers import AutoModel, AutoTokenizer
import tortoise
#from tortoise.utils.audio import load_voice
#import vlc
#from tkVideoPlayer import TkinterVideo
import tkinter as tk
from tkinter import ttk
# Works
app = tk.Tk()
app.geometry("700x700")
app.title("Justris")
ctk.set_appearance_mode("dark")
```
This is my python script.
I used conda-forge to install transformers.
I have run `pip install transformers -U`. It didn't generate any error.<|||||>Thanks for these details. :hugs:
Given your error, I think that what should solve your problem is to install a compatible version of tokenizers, as indicated in the error message, e.g. `pip install tokenizers==0.13.2`. Let me know if this works.<|||||>[Voice - Jupyter Notebook.pdf](https://github.com/huggingface/transformers/files/10089433/Voice.-.Jupyter.Notebook.pdf)
I don't know why this error still exists.<|||||>Couldn't the problem be that the package is not installed in your virtual environment since you are running the start from your jupyter notebook?
Cf [this thread](https://stackoverflow.com/questions/38368318/installing-a-pip-package-from-within-a-jupyter-notebook-not-working)<|||||>> Couldn't the problem be that the package is not installed in your virtual environment since you are running the start from your jupyter notebook?
>
> Cf [this thread](https://stackoverflow.com/questions/38368318/installing-a-pip-package-from-within-a-jupyter-notebook-not-working)
I think I have run pip install both in Jupyter and in the terminal, but it doesn't work. It is so weird.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
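As an aside on the environment-mismatch suggestion discussed above, a quick sanity check one can run inside the notebook (a sketch):
```python
import sys
print(sys.executable)  # the Python interpreter the notebook kernel actually uses

import tokenizers, transformers
print("tokenizers:", tokenizers.__version__)
print("transformers:", transformers.__version__)
```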
transformers | 20,422 | closed | Linguistic words (not word pieces) enter Transformer | ### System Info
Linux Debian
Bert
GPT
### Who can help?
@ArthurZucker
@SaulLu
@LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi,
The Tokenizer from the pretrained model tokenizes natural words (delimited by whitespace) into word pieces automatically. For example,
```python
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.tokenize("Laced with dreams-dripping in reality, the American Dream reignites after 9.11 with a true story about the Devil Ray's mid-life rookie , Jimmy Morris. ")
['[CLS]', 'laced', 'with', 'dreams', '-', 'dripping', 'in', 'reality', ',', 'the', 'american', 'dream', 'reign', '##ites', 'after', '9', '.', '11', 'with', 'a', 'true', 'story', 'about', 'the', 'devil', 'ray', "'", 's', 'mid', '-', 'life', 'rookie', ',', 'jimmy', 'morris', '.', '[SEP]']
# gpt2
['<|endoftext|>', 'L', 'aced', 'Ġwith', 'Ġdreams', 'Ġ-', 'Ġdripping', 'Ġin', 'Ġreality', ',', 'Ġthe', 'ĠAmerican', 'ĠDream', 'Ġreign', 'ites', 'Ġafter', 'Ġ9', '.', '11', 'Ġwith', 'Ġa', 'Ġtrue', 'Ġstory', 'Ġabout', 'Ġthe', 'ĠDevil', 'ĠRay', "'s", 'Ġmid', '-', 'life', 'Ġrookie', ',', 'ĠJimmy', 'ĠMorris', '.', '<|endoftext|>']
# xlnet-base-cased
['<cls>', '▁Lac', 'ed', '▁with', '▁dreams', '▁', '-', '▁dripping', '▁in', '▁reality', ',', '▁the', '▁American', '▁Dream', '▁reign', 'ites', '▁after', '▁9', '.', '11', '▁with', '▁a', '▁true', '▁story', '▁about', '▁the', '▁Devil', '▁Ray', "'", 's', '▁mid', '-', 'life', '▁rookie', ',', '▁Jimmy', '▁Morris', '.', '</s>']
# xlm-mlm-enfr-1024
['<s>', 'laced</w>', 'with</w>', 'dreams</w>', '-</w>', 'dri', 'pping</w>', 'in</w>', 'reality</w>', ',</w>', 'the</w>', 'americ', 'an</w>', 'dream</w>', 're', 'ign', 'ites</w>', 'after</w>', '9.', '11</w>', 'with</w>', 'a</w>', 'true</w>', 'story</w>', 'about</w>', 'the</w>', 'devil</w>', 'ray</w>', "'s</w>", 'mid</w>', '-</w>', 'life</w>', 'rookie</w>', ',</w>', 'j', 'im', 'my</w>', 'mor', 'ris</w>', '.</w>', '</s>']
```
However, I want to tokenize the sentence into linguistic words rather than word pieces when a pretrained Transformer model and its tokenizer are employed; I want to feed natural words into the Transformer.
The result I want to get is below, so that these natural words enter the Transformer model to do some calculations.
```
['Laced', 'with', 'dreams-dripping', 'in', 'reality', ',', 'the', 'American', 'Dream', 'reignites', 'after', '9.11', 'with', 'a', 'true', 'story', 'about', 'the', 'Devil', 'Ray', "'s", 'mid-life', 'rookie', ',', 'Jimmy', 'Morris', '.']
```
How to make some setup in the Tokenizer to realize this?
Many thanks!
Best, Kevin
### Expected behavior
The result I want to get is the natural words which enter the Transformer model to do some calculations.
```
['Laced', 'with', 'dreams-dripping', 'in', 'reality', ',', 'the', 'American', 'Dream', 'reignites', 'after', '9.11', 'with', 'a', 'true', 'story', 'about', 'the', 'Devil', 'Ray', "'s", 'mid-life', 'rookie', ',', 'Jimmy', 'Morris', '.']
```
How to make some setup in the Tokenizer to realize this? | 11-23-2022 18:47:12 | 11-23-2022 18:47:12 | The model only "knows" the tokens present in the tokenizer vocabulary, so you won't be able to pass something else to it.<|||||>Many thanks! @sgugger
I am wondering whether a separate third-party tokenizer tool (from existing packages) is able to first split sentences into natural words, and compute alignments between full words and the sub-words split by the Transformer tokenizer.
It will be greatly appreciated if you could kindly point out these tools.<|||||>Hi @fivehills,
The fast versions of our tokenizers have methods that will surely be useful to you. I advise you to look at [the section " Fast tokenizers' special powers"](https://huggingface.co/course/chapter6/3?fw=pt ) of our course that will explain how you can map tokens to "words" and map the tokens back to the input sentence.<|||||>@SaulLu Many thanks!
[the section " Fast tokenizers' special powers"](https://huggingface.co/course/chapter6/3?fw=pt) can solve the problem of alignment between word pieces and natural words.<|||||>I'm glad this helped you! I'm closing this issue as it seems to have been resolved :blush: |
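For reference, a minimal sketch of the fast-tokenizer alignment approach described above: `word_ids()` maps each sub-token back to the word tracked by the pre-tokenizer (note the pre-tokenizer may also split on punctuation, not only on whitespace):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
text = "Laced with dreams-dripping in reality, the American Dream reignites after 9.11"

encoding = tokenizer(text)
tokens = encoding.tokens()
word_ids = encoding.word_ids()  # None for special tokens like [CLS] / [SEP]

grouped = {}
for token, word_id in zip(tokens, word_ids):
    if word_id is not None:
        grouped.setdefault(word_id, []).append(token)
print(list(grouped.values()))  # sub-word pieces grouped per tracked word
```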
transformers | 20,421 | closed | add in layer gpt2 tokenizer | # What does this PR do?
- Adds in layer `TFGPT2Tokenizer` to enable serialization and serving it with TF Serving
- Small fixes on Tensorflow generation utils and GPT2 attentions to use `tf.shape` instead of `Tensor.shape` to solve max sequence length; and
- Explicitly unstacking the past key values on TF GPT2 Attention layer to avoid `None` shape issues that come with `generate` on TF compiled models.
Addresses first step of #19992
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -> https://github.com/huggingface/transformers/issues/19992
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
-->
| 11-23-2022 18:25:13 | 11-23-2022 18:25:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just figured we have to solve some max lengths on the xla_tests. Changing the way we get the shapes broke some stuff.
Tomorrow I'll fix it either on the method or on all the broken tests. Feel free to do it if anyone here is in a rush. <|||||>Actually, if you want I can handle the generate stuff on another PR, but IMHO it makes sense to keep it here, as it ensures the model will actually work along with the tokenizer. <|||||>Yeah, I would definitely keep any generate fixes that you need in this PR. Overall this looks extremely good, though, and great job figuring out a mapping from the existing BPE tokenizers to TF BPE that works!<|||||>Because this is touching the core library (by adding a `keras-nlp` dependency and an `is_keras_nlp_available()` check) and TF `generate()` code, I'm going to ping @sgugger and @gante to take a look too just to make sure it's all okay!<|||||>Yeah @Rocketknight1 actually I'm think that the generate thing can get bigger than this PR. Let me cut it out and open another PR for that.<|||||>Actually, just did it. Let's keep it simple and handle TF hell on another time. <|||||>Totally fine with that too - as long as it works for the core model we can add extra features over time. I'd love to see how many models in our codebase we could just copy-paste this new approach to as well!<|||||>@Rocketknight1 it works for the core model as per a test on this PR. Do you have any idea on why the add model like runner test is not passing? It seems related to some strange code quality check on a file I didn't change.<|||||>@piEsposito This happens sometimes - it's usually because you just happened to fork the repo at a bad time when that bug was present. The easiest way to fix these issues is just to rebase your PR and force push.
Also, since we're adding a new dependency here I want to wait to get a couple of reviews from the core team, but half the company is missing for Thanksgiving, so things might be a little slow with this PR until Monday! I definitely want to push to get it in for the next release, though.<|||||>@Rocketknight1 happy thanksgiving!
All right, let me rebase it and we wait for the other folks to review it. Thank you for your support :D.<|||||>Should finish to address your review early next week. Stable Diffusion v2 got me into the rabbit hole haha. <|||||>btw @piEsposito if rebasing isn't fixing those tests, don't worry - they're very clearly in files totally untouched by this PR, so I'm happy to merge with them still red! Let me know whenever you're happy with the rest of the PR, and also note that we're planning a branch cut on Tuesday for the release, so this can go into the release if we merge before then!<|||||>@Rocketknight1 this should be finished today if I'm lucky. Thank you for understanding the red test thing.<|||||>@Rocketknight1 by mistake I removed you from the reviews, but I was actually trying to ask you to do one. I'm sorry.
@gante i've addressed your review.<|||||>@Rocketknight1 you can merge it now to keep it on the next release.
I will start trying to replicate this to other BPE tokenizers in the sequence.
Thank you, @gante and @sgugger for the kindness and support. |
transformers | 20,420 | closed | Add BioGPT | # What does this PR do?
Adding BioGPT
Original Implementation and weights - https://github.com/microsoft/BioGPT
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @patrickvonplaten | 11-23-2022 18:18:01 | 11-23-2022 18:18:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada
Done changes according to your suggestions.
Thanks for the review<|||||>Thanks @kamalkraj
Let me give it another round of review and I'll get back to you<|||||>@sgugger
Done changes according to your suggestions.
Thanks for the review<|||||>Hi @kamalkraj
The repo has been moved to microsoft: https://huggingface.co/microsoft/biogpt
Could you please update the PR accordingly? Also it seems that you need to rebase to main
Thanks!<|||||>@younesbelkada Done the changes
Thanks<|||||>thanks @kamalkraj !
It seems that styling tests are failing, could you please run `make fixup`?<|||||>@younesbelkada fixed<|||||>Thanks so much @kamalkraj !
Let's leave it now to @sgugger to give his review ;)
Thanks! |
transformers | 20,419 | closed | Fix device in longformer onnx path | # What does this PR do?
Longformer has a custom path in its `_chunk()` method in order to be traceable (to some extent) and exportable to ONNX. https://github.com/huggingface/transformers/pull/20292 fixed a bug where this special path was always registering a non-general case:
https://github.com/huggingface/transformers/blob/9ef46659da45f6b605873ca59124d03976990b33/src/transformers/models/longformer/modeling_longformer.py#L785-L787
It seems the `else` path that should be taken during the export was never actually tested, and notably never tested on GPU. This PR fixes a device assignment in that path.
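As a hedged illustration (not the actual diff), the pattern being fixed is of this kind — any helper tensor built inside the traced branch has to live on the inputs' device:

```python
import torch

def onnx_friendly_helper(hidden_states: torch.Tensor) -> torch.Tensor:
    # Illustrative only: a helper tensor created without `device=` lands on CPU,
    # which breaks the exported `else` branch as soon as `hidden_states` is on GPU.
    return torch.tensor(hidden_states.size(1), device=hidden_states.device)
```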
The test `RUN_SLOW=1 python -m pytest -v tests/onnx/test_onnx_v2.py::OnnxExportTestCaseV2::test_pytorch_export_on_cuda_102_longformer_token_classification` is now running fine.
## Who can review?
@lewtun @ydshieh | 11-23-2022 17:17:41 | 11-23-2022 17:17:41 | gently pinging @sgugger for final approval :)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,418 | closed | `additional_special_tokens` is replaced instead of being updated |
### Reproduction
```python
from transformers import T5Tokenizer
from transformers.tokenization_utils import AddedToken
tokenizer = T5Tokenizer.from_pretrained("t5-small", extra_ids=0, additional_special_tokens=["new_token_1"])
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_2"]})
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_3"]})
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
```
gives
```
['new_token_1']
{'new_token_1': 32000}
['new_token_2']
{'new_token_1': 32000, 'new_token_2': 32001}
['new_token_3']
{'new_token_1': 32000, 'new_token_2': 32001, 'new_token_3': 32002}
```
### Expected behavior
We should get
['new_token_1', 'new_token_2', 'new_token_3'] | 11-23-2022 17:03:23 | 11-23-2022 17:03:23 | I also saw this issue, and found it very unintuitive. Should be addressed in V5 imo. Cc @LysandreJik @SaulLu <|||||>To share what has been discussed on slack.
I think the naming is confusing but the current behavior of the method is useful because we need to be able to completely change the list of tokens associated with additional special tokens. Maybe calling this method `set_special_tokens` would be less confusing?
Basically, the difficulty here is that I think it's not necessarily obvious what types of tokens a tokenizer can have.
A tokenizer has a vocabulary (a dictionary mapping tokens to ids) that consists of:
1. The vocabulary learned with a tokenization algorithm (BPE, wordpiece or Unigram)
2. The vocabulary corresponding to the tokens added afterwards and which will be isolated upstream of the tokenization algorithm. They are called `added_tokens`.
Afterwards, some tokens can be special in the sense that they will be associated with a specific role for the model (e.g. the begin of sentence token, the mask token). The additional special tokens list is here to be able to flag more tokens as special for the models that need them [_Even if for the model it is not safe to go and find a special token based on an index in a list but that's another subject_].
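A tiny sketch of my own (not from the thread — it downloads `t5-small`) of the distinction between the two notions:

```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small", extra_ids=0)
tok.add_tokens(["plain_added_token"])                                # added, not special
tok.add_special_tokens({"additional_special_tokens": ["<my_sep>"]})  # added AND flagged as special
print(sorted(tok.added_tokens_encoder))   # both tokens are in the vocabulary
print(tok.additional_special_tokens)      # only <my_sep> carries the "special" flag
```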
<|||||>The problem is that we need some consistency between `additional_special_tokens` and `added_tokens_encoder`. I don't mean we can achieve this 100%, but in this example, if the role of `additional_special_tokens` is to `completely change the list of tokens associated with additional special tokens`, we can't have `added_tokens_encoder` still keep the original added tokens. This doesn't make any sense and can cause to strange errors like the wrong length of the full tokenizer, which is defined as
```python
def __len__(self):
"""
Size of the full vocabulary with the added tokens.
"""
return self.vocab_size + len(self.added_tokens_encoder)
```<|||||>I understood what was bothering you :smile: , that's why in my previous message I tried to show that the notion of "special token" is different from the notion of "added token". So from my point of view, we _don't absolutely need_ some consistency between `additional_special_tokens` and `added_tokens_encoder`.
In the end, what you propose is maybe the desirable behavior of the tokenizer but before making this breaking change I would like to explain why I think that the length of the full tokenizer is not wrong now.
Currently, the approach is that it's not because we change a flag on a token (e.g. removing a token from the `additional_special_tokens` list) that we necessarily want to remove it from the vocabulary. Indeed, if we do so, it means that we can potentially reassign these ids to new tokens, which is not an action without consequences either. From my point of view, if we are willing to change the current behavior we need to answer the following questions: in which situations would a user use the `add_special_tokens` method? Can it be used once the `additional_special_tokens`, `bos_token`, etc are already set? If yes, does the user want to exclude those previous tokens from the vocabulary? Can the effect of the method be error-prone (on the result of the model)?
Finally, what I want to say is that I was told to avoid breaking changes as much as possible in transformers. That's why I spent a lot of time trying to figure out the intention behind the effect of each method. In this case, I think there is a valid reason why these methods act the way they do. Maybe today the usage has changed a lot and this effect has finally more disadvantages than advantages. For this particular discussion, I'm not (yet) convinced that's the case.
---------------
To illustrate why I think that the length of the full tokenizer is not currently wrong.
```python
from transformers import T5Tokenizer
from transformers.tokenization_utils import AddedToken
tokenizer = T5Tokenizer.from_pretrained("t5-small", extra_ids=0, additional_special_tokens=["new_token_1"])
text = "this is a text with new_token_1, new_token_2 and new_token_3 "
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))
print("***")
tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_2"]})
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))
print("***")
tokenizer.add_special_tokens({"additional_special_tokens": ["new_token_3"]})
print(tokenizer.additional_special_tokens)
print(tokenizer.added_tokens_encoder)
print(tokenizer.convert_ids_to_tokens(tokenizer.encode(text)))
```
```
['new_token_1']
{'new_token_1': 32000}
['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', '▁new', '_', 'to', 'ken', '_', '2', '▁and', '▁new', '_', 'to', 'ken', '_', '3', '</s>']
***
['new_token_2']
{'new_token_1': 32000, 'new_token_2': 32001}
['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', 'new_token_2', '▁and', '▁new', '_', 'to', 'ken', '_', '3', '</s>']
***
['new_token_3']
{'new_token_1': 32000, 'new_token_2': 32001, 'new_token_3': 32002}
['▁this', '▁is', '▁', 'a', '▁text', '▁with', 'new_token_1', '▁', ',', 'new_token_2', '▁and', 'new_token_3', '</s>']
```
The last tokenization shows that `new_token_2` and `new_token_3` still are in the vocabulary even if they are not "special tokens" flagged.
<|||||>Oh! I get your point @SaulLu now ! Thank you for the patience to correct my desire to break everything.
<|||||>> Oh! I get your point @SaulLu now ! Thank you for the patience to correct my desire to break everything.
No worries, actually I took a long time to understand this subtlety of the code and I'm glad that it can be useful! In any case, this discussion shows that tokenizers are not easy to understand and that we can surely improve this aspect!
|
transformers | 20,417 | closed | Add FAN Model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #17234
Implements the FAN models described in this [paper](https://arxiv.org/pdf/2204.12451.pdf) and available in the following [GitHub repo](https://github.com/NVlabs/FAN). Additionally, this repo has some of the weights available, as described in its README file.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
This is a cleanup of the previous PR #20288 in order to maintain branch integrity; the recommendations by @NielsRogge were implemented.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge, @sgugger, @patrickvonplaten
## Additional Request
If this PR gets merged, would it be possible to migrate the model files from [my HF space](https://huggingface.co/ksmcg) to the NVIDIA space?
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-23-2022 15:19:47 | 11-23-2022 15:19:47 | Hi @sgugger thanks for you're feedback. I'll try to implement the changes soon<|||||>Implemented suggestions by @sgugger.
Pending the change on the README.md path, since I'm uncertain if I need to change only the README.md path or the actual doc path.
Also pending rebase<|||||>Thanks for working on this! You need to change the link to the doc in the READMEs as suggested, but not the path to the file. You will also need to rebase/resolve the conflicts.
@NielsRogge could you have a review before I do a final pass?<|||||>I've applied the README.md update and rebased the branch.
<|||||>Hi @NielsRogge, @sgugger.
First of all, happy new year - I hope 2023 is an even greater success than 2022 for the Hugging Face team.
I've resolved the merge conflicts, and was hoping to know if any additional steps were required for this PR?
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20417). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,416 | closed | Fix ModelOutput instantiation when there is only one tuple | # What does this PR do?
This PR fixes a bug discovered by @NielsRogge [here](https://github.com/huggingface/transformers/pull/20407#discussion_r1030465759).
To behave like a dict, `ModelOutput` need to accept a single iterator of key/value pairs as its first argument (otherwise many properties of dictionaries instantiation are left) which caused an issue here since Niels is creating one with a single tuple (but not of key/value pairs). To fix this, added some stronger checks, and new tests. | 11-23-2022 14:45:35 | 11-23-2022 14:45:35 | |
transformers | 20,415 | closed | GPT2LMHeadModel not working with mps device, RuntimeError: tensors must be 2-D | ### System Info
- `transformers` version: 4.24.0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.13
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, (mps)
- Using distributed or parallel set-up in script?: no
### Who can help?
@patil-suraj, @patrickvonplaten, @LysandreJik
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Load the model and add to `mps` device
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained("gpt2-large").to("mps").eval()
```
Then run the model with some sample random input.
```python
import torch

test = torch.randint(0, 100, (1, 10)).to("mps")
predictions = model(input_ids=test)
```
Then an error is thrown
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In [27], line 1
----> 1 predictions = model(input_ids=test)
3 predictions.logits
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:1046, in GPT2LMHeadModel.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1038 r"""
1039 labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1040 Labels for language modeling. Note that the labels **are shifted** inside the model, i.e. you can set
1041 `labels = input_ids` Indices are selected in `[-100, 0, ..., config.vocab_size]` All labels set to `-100`
1042 are ignored (masked), the loss is only computed for labels in `[0, ..., config.vocab_size]`
1043 """
1044 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1046 transformer_outputs = self.transformer(
1047 input_ids,
1048 past_key_values=past_key_values,
1049 attention_mask=attention_mask,
1050 token_type_ids=token_type_ids,
1051 position_ids=position_ids,
1052 head_mask=head_mask,
1053 inputs_embeds=inputs_embeds,
1054 encoder_hidden_states=encoder_hidden_states,
1055 encoder_attention_mask=encoder_attention_mask,
1056 use_cache=use_cache,
1057 output_attentions=output_attentions,
1058 output_hidden_states=output_hidden_states,
1059 return_dict=return_dict,
1060 )
1061 hidden_states = transformer_outputs[0]
1063 # Set device for model parallelism
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:889, in GPT2Model.forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions, output_hidden_states, return_dict)
879 outputs = torch.utils.checkpoint.checkpoint(
880 create_custom_forward(block),
881 hidden_states,
(...)
886 encoder_attention_mask,
887 )
888 else:
--> 889 outputs = block(
890 hidden_states,
891 layer_past=layer_past,
892 attention_mask=attention_mask,
893 head_mask=head_mask[i],
894 encoder_hidden_states=encoder_hidden_states,
895 encoder_attention_mask=encoder_attention_mask,
896 use_cache=use_cache,
897 output_attentions=output_attentions,
898 )
900 hidden_states = outputs[0]
901 if use_cache is True:
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:389, in GPT2Block.forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
387 residual = hidden_states
388 hidden_states = self.ln_1(hidden_states)
--> 389 attn_outputs = self.attn(
390 hidden_states,
391 layer_past=layer_past,
392 attention_mask=attention_mask,
393 head_mask=head_mask,
394 use_cache=use_cache,
395 output_attentions=output_attentions,
396 )
397 attn_output = attn_outputs[0] # output_attn: a, present, (attentions)
398 outputs = attn_outputs[1:]
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py:311, in GPT2Attention.forward(self, hidden_states, layer_past, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, use_cache, output_attentions)
309 attention_mask = encoder_attention_mask
310 else:
--> 311 query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
313 query = self._split_heads(query, self.num_heads, self.head_dim)
314 key = self._split_heads(key, self.num_heads, self.head_dim)
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/torch/nn/modules/module.py:1130, in Module._call_impl(self, *input, **kwargs)
1126 # If we don't have any hooks, we want to skip the rest of the logic in
1127 # this function, and just call forward.
1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1129 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130 return forward_call(*input, **kwargs)
1131 # Do not call functions when jit is used
1132 full_backward_hooks, non_full_backward_hooks = [], []
File ~/opt/miniconda3/envs/torch/lib/python3.9/site-packages/transformers/pytorch_utils.py:112, in Conv1D.forward(self, x)
110 def forward(self, x):
111 size_out = x.size()[:-1] + (self.nf,)
--> 112 x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
113 x = x.view(size_out)
114 return x
RuntimeError: tensors must be 2-D
```
### Expected behavior
If we change the model to device `cuda` or `cpu`:
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained("gpt2-large").to("cpu").eval()
test = torch.randint(0, 100, (1, 10)).to("cpu")
predictions = model(input_ids=test)
```
This works and produces a prediction vector. | 11-23-2022 14:28:55 | 11-23-2022 14:28:55 | Looks like this comes from an op not implemented in PyTorch. I'd try upgrading PyTorch (even to the nigthlies) as support for MPS is still in progress on their side.<|||||>Thanks @sgugger, sorry for wasting your time. I had pytorch 1.12.1 and 1.13 worked fine. |
transformers | 20,414 | closed | ASR Pipeline is not super user-friendly | ### Feature request
Firstly, thank you to @Narsil for developing the speech recognition pipeline - it's incredibly helpful for running the full speech-to-text mapping in one call, pre- and post-processing included.
There are a couple of things that currently make the pipeline not super compatible with 🤗 Datasets. I'll motivate them below with an example.
### Motivation
Let's take the example of evaluating a (dummy) Wav2Vec2 checkpoint on the (dummy) LibriSpeech ASR dataset:
```python
from transformers import pipeline
from datasets import load_dataset
pipe = pipeline("automatic-speech-recognition", model="hf-internal-testing/tiny-random-wav2vec2")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
```
Printing the first audio sample of the dataset:
```python
print(dataset[0]["audio"])
```
**Print Output:**
```
{'path': '/home/sanchit_huggingface_co/.cache/huggingface/datasets/downloads/extracted/0393f71a8093c6541f95c89f60982213cf086569876e1195926741f097ad47fc/dev_clean/1272/128104/1272-128104-0000.flac',
'array': array([0.00238037, 0.0020752 , 0.00198364, ..., 0.00042725, 0.00057983,
0.0010376 ], dtype=float32),
'sampling_rate': 16000}
```
So the audio samples are in the format: `{"path": str, "array": np.array, "sampling_rate": int}`. The np audio array values are stored under the key "array". This format is ubiquitous across audio datasets in 🤗 Datasets: all audio datasets take this format.
However, pipeline expects the audio samples in the format `{"sampling_rate": int, "raw": np.array}`:
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L209-L211
This means we have to do some hacking around to get the audio samples into the right format for pipeline:
```python
def predict(batch):
audios = batch["audio"]
    # hacky renaming
audios = [{"raw": sample["array"], "sampling_rate": sample["sampling_rate"]} for sample in audios]
predictions = pipe(audios)
# unpack and index predictions (List[Dict])
batch["predictions"] = [pred["text"] for pred in predictions]
return batch
```
And then apply the function to our dataset using the `map` method:
```python
batch_size = 4
result_set = dataset.map(
predict,
batched=True,
batch_size=batch_size,
remove_columns=dataset.features.keys(),
)
```
If pipeline's `__call__` method was matched to Datasets' audio features, we'd be able to use any audio dataset **directly** with pipeline (no hacky feature renaming):
```python
def hypothetical_predict(batch):
    predictions = pipe(batch["audio"])
batch["predictions"] = [pred["text"] for pred in predictions]
return batch
```
This would be very nice for the user!
Furthermore, the outputs returned by pipeline are a list of dicts (`List[Dict]`):
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L477
This means we have to unpack and index them before we can use them for any downstream use (such as WER calculations).
It would be nice if pipeline returned a [`ModelOutput`](https://github.com/huggingface/transformers/blob/1c6309bf79c76b45de2266c586caccbfbc8ef958/src/transformers/utils/generic.py#L190) class. That way, we could index the text column directly from the returned object:
```python
def hypothetical_predict(batch):
batch["predictions"] = pipe(batch["audio"]).text
return batch
```
IMO this is more intuitive to the user than renaming their audio column and then iterating over the returned Dict object to get the predicted text.
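For concreteness, here's a rough sketch (not part of the original proposal, and assuming the pipeline accepted the Datasets-style audio dicts directly, plus the `evaluate` library for WER) of the evaluation loop I'd like to be able to write:

```python
import evaluate
from datasets import load_dataset
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="hf-internal-testing/tiny-random-wav2vec2")
dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
wer = evaluate.load("wer")

# feed the audio dicts one by one via a generator (see the discussion below) instead of a list
predictions = [out["text"] for out in pipe(sample["audio"] for sample in dataset)]
wer_score = wer.compute(predictions=predictions, references=dataset["text"])
```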
### Your contribution
WDYT @Narsil @patrickvonplaten? Happy to add these changes to smooth out the user experience! | 11-23-2022 12:56:25 | 11-23-2022 12:56:25 | One additional point! We can't pass generation kwargs to the `generate` method:
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L369-L372
This means our stdout is bombarded with UserWarnings from the `generate` method:
```
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py:1364: UserWarning: Neither `max_length` nor `max_new_tokens` has been set, `max_length` will default to 448 (`self.config.max_length`). Controlling `max_length` via the config is deprecated and `max_length` will be removed from the config in v5 of Transformers -- we recommend using `max_new_tokens` to control the maximum length of the generation.
```
Would be nice to be able to override generation kwargs to prevent these messages and have flexibility over max length, beams, temperature, length penalty, etc
cc @Vaibhavs10 <|||||>Just went through the code in more-detail and found that "array" is pop'd from the input dict!
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/pipelines/automatic_speech_recognition.py#L278-L280
Maybe we can add this to the docstring to highlight!<|||||>Multiple points:
> However, pipeline expects the audio samples in the format
as far as I remember we can also accept `array` for that reason. (`raw` came before `datasets` had normalized iirc so that's the reason for the discrepancy, but since we don't break, neither is going to go away in pipeline I'm afraid.
The problem is not `array` it's `audio`. See more in the docs about `KeyDataset` (or the iterator which I think is more elegant, but it lacks the number of items) : https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline
> It would be nice if pipeline returned a [ModelOutput](https://github.com/huggingface/transformers/blob/1c6309bf79c76b45de2266c586caccbfbc8ef958/src/transformers/utils/generic.py#L190) class. That way, we could index the text column directly from the returned object:
This is not going to happen for reasons I'll explain in following points
> We can't pass generation kwargs to the generate method:
We can add it as a `generate_kwargs` but I think we wanted to change the configs instead of the affected model (which were not defined for whisper I think) @ArthurZucker . If `max_length` is the actual maximal capacity of the model, everything should be fine, no warnings no nothing.
We could also make the warning appear only once. @sgugger since reducing noise seems something desirable.
Being able to send `generate_kwargs` would still be nice. (Careful I'm meaning `pipe(..., generate_kwargs={"max_new_tokens":20})` NOT `pipe(...., max_new_tokens=20)` the reason is because generate has clashed in the past with tokenizer kwargs for instance and it's impossible to distentangle after the fact. That's for passing generic kwargs (all of them through time and eternity), but we can definitely add some first class parameters (like `max_new_tokens` for instance).
> Maybe we can add this to the docstring to highlight!
Totally !
> dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation[:10]")
I highly recommend NOT loading the entire array of the datasets in memory when working on datasets. That means NOT passing around lists, and not being able to batch with `ModelOutputs`.
That because objects are meant to be consumed one by one in an iterable fashion.
This is true for datasets, but also for webservers, you can have pretty much the same code, do dynamic batching and such crazy stuff and still keep the code the same for instance.
This is not relevant for `dataset.map` since it does the slicing and batching on its own, but it is relevant when `pipe.preprocess` can leverage the streaming mode to compute multiple things at once.
Using generator and streams is much more efficient (and the pipeline will actually do the batching too, passing around lists to the pipeline will NOT batch things. ( More on batching in pipelines : https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching)
Here is the recommendation from the docs: https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.pipeline (Still need to upgrade that part to make it to the tutorial).
Here is a gist of few several examples: https://gist.github.com/Narsil/4f5b088f4dd23200d16dd2cc575fdc16
```python
Method 1 (pipe) 0:00:00.294485
Method 2 (dataset) 0:00:00.308238
Method 3 (raw file) 0:00:00.635527
```
The 5% speedup is pretty consistent on this smallish data.
Method 3 is slower, but because you don't need to decode the audio files within the dataset, this can save some disk space (at a compute cost). Keep in mind the `num_workers=1` means the actual decompression of audio files happens in a different thread (and even process since we're relying on ffmpeg for it).
I tried actually batching inputs, but it seems it's detrimental in this case (just add `, batch_size=2` during pipeline initialization).
Method 1 is 20% faster than method 2 with actual batching, but 50% slower than without :
https://huggingface.co/docs/transformers/main_classes/pipelines#pipeline-batching for more info on why batching can hurt.
I had to add a "warmup" to do fair comparisons, it seems `dataset` is decompressing the flies on first access (it's my best guess) and it seems to do it slower than the raw pipeline (it's because of the threading and because librosa is actually slower that raw ffmpeg, I think, at least I remember it was "slow" to decompress).
Happy to discuss further how to make the integration easier. I should mention that `KeyDataset` is probably the nicest to use as it should keep the length, it's just one weird import away
```python
from transformers.pipelines.pt_utils import KeyDataset
...
for out in pipe(KeyDataset(dataset, "audio")):
pass
```
It has the same performance as method1 but plays better with `tqdm`. It's just less flexible imo.
<|||||>Thanks for the super in-depth explanation, @Narsil! Incredibly helpful and much appreciated π€
Maybe I'm missing the point a bit with why pipelines exist - are they geared more towards maximising performance for inference (or at least giving you the option to)? Rather than just being a nice wrapper around the feature extractor, model and tokenizer?
Sounds good regarding:
1. Updating the doc string to reflect the fact that we can pass `array` as well as `raw` as the keys for audio input
2. Passing the **gen kwargs** as a specified dict to the generate method
Thanks for explaining why `ModelOutputs` is not viable! It makes sense using a generator and streams, rather than throwing a list into pipe.
> (Still need to upgrade that part to make it to the tutorial).
Is there a tutorial that's been published or a WIP? That'll be super handy!
> Here is a gist of few several examples: https://gist.github.com/Narsil/4f5b088f4dd23200d16dd2cc575fdc16
Super comprehensive, thanks for these benchmarks! Interesting to see how `.map` compares to the generator method!
> I should mention that KeyDataset is probably the nicest to use as it should keep the length, it's just one weird import away
Thanks for flagging this! I had a follow-up question - are there docs / examples for using pipe when loading a dataset in streaming mode? Here, we can't use KeyDataset (as we can't index a streamed dataset):
https://github.com/huggingface/transformers/blob/afce73bd9d891b55dcb8d4d875d17718ffa01ff0/src/transformers/pipelines/pt_utils.py#L305
Is the best option just to go for a generator here?
```python
def data():
for i, sample in enumerate(dataset):
yield sample["audio"]
output = []
for out in pipe(data(), batch_size=2):
output.append(out["text"])
```
With this generator method, we currently `yield` the audio samples which we pass to the pipe. Is there a way of iterating over the streaming dataset to get the target transcriptions (`sample["text"]`) as well? Here, we would not need to pass the target text to the pipe, but simply return it in the generator. Ideally, want the target transcriptions `sample["text"]` so that we can assess our predictions.
(this is the actual example I'm working with: https://github.com/sanchit-gandhi/codesnippets/blob/main/benchmark_inference_whisper.ipynb)<|||||>> Thanks for the super in-depth explanation, @Narsil! Incredibly helpful and much appreciated hugs
Well you initial issue was also pretty comprehensive, so thanks for creating it.
> Maybe I'm missing the point a bit with why pipelines exist - are they geared more towards maximising performance for inference (or at least giving you the option to)? Rather than just being a nice wrapper around the feature extractor, model and tokenizer?
Pipeline started without any real guidelines into what they should or should not do.
Currently the first and foremost goal is to **make ML accessible for users who have no clue what is a model or tensors**, it's the primary target.
That being said, being efficient for inference goes along since we don't want to provide a 10x slowdown experience for those users.
It's not the primary focus though, otherwise it would not be written in python, and it would not be that convenient :).
Let's say there are 2 kinds of performance:
- Don't do useless work (Remove crap code, or code which is not really useful, or work that's discarded, useless copies etc..)
- Actual performance by making sure every inch of your hardware is properly used at the appropriate time. (Read understanding CPU instructions, looking a SIMD, optimizing threading layout, maximizing L1 cache hits, minimizing branching predictions, using custom GPU kernels, etc..)
We're only doing the first kind here. (Maybe a little of 2 for the GPU feeding that needs to be as fast as possible because CPU-GPU is a bottleneck really quick otherwise)
> Is there a tutorial that's been published or a WIP? That'll be super handy!
There this tutorial https://huggingface.co/docs/transformers/pipeline_tutorial which I find less comprehensive than this https://huggingface.co/docs/transformers/main_classes/pipelines unfortunatly.
I'm in the process of rewriting it, as it seems most people read only that. And you're not the first person to not be aware of those cool features, so I'd say it's a doc problem.
> Super comprehensive, thanks for these benchmarks! Interesting to see how .map compares to the generator method!
Can't tell you why there is a difference, but I can tell you I went to great length to optimize everything I could in the pipeline directly. (Only the first kind of optimization, and it's still written in Python so far from perfect but hey ... :) )
> With this generator method, we currently yield the audio samples which we pass to the pipe. Is there a way of iterating over the streaming dataset to get the target transcriptions (sample["text"]) as well?
Actually if you pass along other keys in your data, they should be passed along all the way to the result with the asr pipeline.
I would like to be the case for all pipelines, but never got down to doing it.
But since it is streaming, yes you need to pass things around since otherwise it's tricky to start matching results with inputs at the end.
```python
def data():
for item in streaming_data:
yield {**item["audio"], "expected": item["text"]}
for out in pipe(data()):
generated = out["text"]
expected = out["expected"]
# Do you WER thing.
```
Would that work ? (I haven't tested this)
If it wasn't you could do
Something like that might be a useful hack though (Provided you're running in a single thread for the server looping).
```python
GLOBAL_INDEX = {}
def data():
for i, item in enumerate(streaming_data):
GLOBAL_INDEX[i] = item["text"]
yield item
for i, out in enumerate(pipe(data())):
generated = out["text"]
expected = GLOBAL_INDEX.pop(i) # Pop will remove it enabling releasing memory
# Do you WER thing.
```
<|||||>Thank you again for the super comprehensive reply, really appreciate the time given to answering this thread!
> make ML accessible for users who have no clue what is a model or tensors
Awesome! Think it's fantastic in this regard. Having some easy examples that show you how to run pipeline in different scenarios / tasks like a little 'recipe' book would be great to further this.
> otherwise it would not be written in python, and it would not be that convenient :)
Did someone say Rust π
Thanks for linking the tutorials - I learnt quite a lot from this thread + docs after knowing where to look. I guess you have two camps of people that will be using pipeline:
1. Those migrating from the transformers approach (feature extractor + model + processor)
2. Those who don't use transformers
For me, it was making the link between my transformers approach and pipeline that made the penny drop. There's a bit of a different mindset which you have to adopt vs the usual datasets `.map` method. I think some more examples showing how to make actual transformers tasks work in pipeline would go a long way! In this regard, your updated tutorial looks amazing (doing exactly this)! Happy to do a pass of the PR when it's in a review ready state!
> Would that work ? (I haven't tested this)
It did indeed work, thanks π<|||||>I think we should definitely try to avoid by default displaying warnings when running the ASRPipeline.
Also, since Whisper is a Encoder-Decoder model architecture the main use case for speech recognition might soon switch from Wav2Vec2CTC to Encoder-Decoder => thus we should also try to adapt the ASR pipeline into this direction.
**Short term:**
Let's try to not display any warnings by default & I agree with @sanchit-gandhi - it'd also be nice to pipelines to be directly used in combination with datasets. Could we maybe adapt the pipeline API from:
```
{"sampling_rate": int, "raw": np.array}
```
to
```
{"sampling_rate": int, "raw": Optional[np.array], "array": Optional[np.array]}
```
to just allow both use cases? What is the big drawback of this?
**Mid/Long term**
As discussed with @sgugger and @sanchit-gandhi already a bit, I think we should really think about creating a new generate method just for audio. The current `generate` method is a) too bloated and b) just not adapted for speech recognition. Chunking, long-range audio recognition, streamed audio recognition are much more of a required use case for speech recognition then for NLP. Also we could design the new generate method to be future compatible with models like the Transducer.
This would then also render the ASR pipeline much easier IMO. <|||||>> What is the big drawback of this?
This is already done, it's a doc issue. And specifically for sanchit, datasets are using `{"audio" : {"sampling_rate": .., "audio": ..}}` instead of the inner dict.
> The current generate method is a) too bloated and b) just not adapted for speech recognition.
True, I have potential suggestions for it, which mainly are going full on Processor/StoppingCriteria route. This is what was necessary to enable complex batching within bloom inference.
Splitting specifically for audio might be necessary but I am under the impression it's only a matter of defaults for those objects.<|||||>Maybe a bigger discussion, but could it make sense to move some more complicated tasks such as real-time speech recognition to something like: https://github.com/huggingface/speechbox ? <|||||>For cases like realtime ASR more optimized methods, for example as rust modules, would be super cool.
Maybe with functionality for community pipelines as in diffusers, just for speech ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,413 | closed | DeBERTa-v2's build_relative_position method initializes tensor on cpu and costs much time | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-4.14.105-1-tlinux3-0013-x86_64-with-glibc2.10
- Python version: 3.8.3
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.10.1+cu113 (True)
- Tensorflow version (GPU?): 2.10.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@LysandreJik
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
hello, I am using DeBERTa-v2 in my code, and it runs slowly.
With `torch.profile`, I find `Self CPU time`(more than 1.7s) is much larger than `Self CUDA time`(about 30ms). And most CPU time is from function `build_relative_position`, where two tensors are initialized without specifying their `device`. So the two tensors and the following codes are running on cpu, and it costs much time.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#592:~:text=q_ids%20%3D%20torch,%2C%20key_size)


My test script is a simple forward process with `torch.profile` to get time cost.
```
import torch
from torch.profiler import profile, record_function, ProfilerActivity
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline, BertTokenizer, AutoConfig, AutoModel, DebertaV2ForMaskedLM
model_path =
data_path =
tokenizer=AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path).to("cuda")
data = open(data_path).readlines()
max_length, batch_size = 256, 16
with torch.no_grad():
for step in range(10):
L = data[step * batch_size: step * batch_size + batch_size]
batch_inputs = {i: torch.tensor(j).to('cuda') for (i, j) in
tokenizer(L, max_length=max_length, padding='max_length').items()}
with profile(
activities=[ProfilerActivity.CUDA, ProfilerActivity.CPU],
with_stack=True,
) as prof:
outputs = model(**batch_inputs)
print(prof.key_averages(group_by_stack_n=15, group_by_input_shape=True).table(sort_by="self_cpu_time_total", row_limit=2))
```
### Expected behavior
I hope the tensors in function `build_relative_position` are on the proper device(cuda if accessible) when initialized, instead of making the results `to(xxx.device)` after some computation.
In my modified code where the two tensors are initialized on cuda, the cost of cpu time will be reduced to about 77ms from 1s on my machine.

 | 11-23-2022 12:38:14 | 11-23-2022 12:38:14 | We'd be happy to review a PR if you want to fix this :-)<|||||>> We'd be happy to review a PR if you want to fix this :-)
Thanks for replying, I've created a PR for this issue :-) |
transformers | 20,412 | closed | Add run_mim_no_trainer.py example script | # What does this PR do?
Adds a no_trainer example script for image pretraining.
Relates to https://github.com/huggingface/transformers/issues/20053
@NielsRogge, It is still incomplete but could you please have a look at the code so far to see if something needs to be changed :-)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-23-2022 12:11:17 | 11-23-2022 12:11:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20412). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge can I get some feedback on the code so far. Thanks :-). Could you also tell me if there are any specific tests that I need to run for this case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge friendly ping on this PR.<|||||>Hi @Saad135 the initial draft looks great already!
Let me know if you need any help finishing this PR.<|||||>Hello @NielsRogge, thank you for checking out the draft. I would appreciate if you could point me towards the next steps I should take. I mean, should I make the draft ready for review or should I run some specific tests or maybe something else? I am still quite new to OS contributions, so the next step might be a very simple one which is not apparent to me right now.<|||||>I'd recommend running both the trainer.py and no_trainer.py scripts on a small dataset and see if they progress similarly.
Once that's done, you can mark the PR as ready for review<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Cc @amyeroberts, this PR should be ready for review.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Saad135 - any updates on this PR? Once feature extractor references have been updated to image processor and the check Niels suggested have been done it should be ready to merge :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@amyeroberts This PR seems to be have been stalled and closed due to inactivity. I have taken it over in PR #23156 to complete it. |
transformers | 20,411 | closed | `accelerate` support for `OwlViT` | # What does this PR do?
This PR adds `accelerate` support for `OwlViT` so that any model from this family can be loaded in `8-bit`! Here is a small snippet on how to load and run the model in `8-bit` (based on the snippet from the model card):
```python
# pip install accelerate bitsandbytes
import requests
from PIL import Image
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32", device_map="auto", load_in_8bit=True)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")
outputs = model(**inputs)
# Target image sizes (height, width) to rescale box predictions [batch_size, 2]
target_sizes = torch.Tensor([image.size[::-1]])
# Convert outputs (bounding boxes and class logits) to COCO API
results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
i = 0 # Retrieve predictions for the first image for the corresponding text queries
text = texts[i]
boxes, scores, labels = results[i]["boxes"], results[i]["scores"], results[i]["labels"]
# Print detected objects and rescaled box coordinates
score_threshold = 0.1
for box, score, label in zip(boxes, scores, labels):
box = [round(i, 2) for i in box.tolist()]
if score >= score_threshold:
print(f"Detected {text[label]} with confidence {round(score.item(), 3)} at location {box}")
>>> Detected a photo of a cat with confidence 0.705 at location [321.25, 19.8, 643.12, 376.88]
>>> Detected a photo of a cat with confidence 0.729 at location [0.94, 53.55, 319.69, 473.91]
```
Also added a slow test to make sure users can run inference in `fp16` (`8bit` converts the model to `fp16` under the hood).
All slow tests pass except the one mentioned in #20410 that should be fixed once merged
cc @alaradirik @sgugger @NielsRogge | 11-23-2022 12:05:59 | 11-23-2022 12:05:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks everyone! |
transformers | 20,410 | closed | [OWL VIT] make daily CI happy | # What does this PR do?
Fixes a slow test for OWLViT while trying to integrate `accelerate` support for this model!
cc @ydshieh @alaradirik
Slow test that was failing: `tests/models/owlvit/test_modeling_owlvit.py::OwlViTModelIntegrationTest::test_inference_one_shot_object_detection`
all slow tests pass!
| 11-23-2022 11:29:20 | 11-23-2022 11:29:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @ydshieh , just updated the description!
Yes without this change, the test would fail on GPU (`accelerate` tests excluded) |
transformers | 20,409 | closed | [BNB] Throw `ValueError` when trying to cast or assign | # What does this PR do?
I have seen several pieces of code where users try to assign `8-bit`-loaded models to a new device and/or `dtype`; this PR aims to throw an error for users that do this.
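As a hypothetical illustration (requires a GPU with `bitsandbytes` installed), the calls this PR turns into explicit errors look like:

```python
from transformers import AutoModel

model_8bit = AutoModel.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
model_8bit.to("cpu")  # with this PR: raises a ValueError instead of silently producing a broken model
model_8bit.half()     # same for dtype casts on an 8-bit model
```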
Also added a set of slow tests
cc @sgugger @ydshieh
Do not merge before #20408
| 11-23-2022 11:15:38 | 11-23-2022 11:15:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>What happens if we don't have the changes in this PR and users try to do assign 8-bit loaded models into a new device and/or dtype?<|||||>Users can face unexpected behaviors such as: https://github.com/huggingface/transformers/issues/20361#issuecomment-1324113579 |
transformers | 20,408 | closed | [BNB] fix nasty `bnb` bug | # What does this PR do?
This PR fixes a very nasty bug that can be reproduced with the following script:
```python
from transformers import AutoModel
model_16bit = AutoModel.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=False)
model_8bit = AutoModel.from_pretrained("bigscience/bloom-560m", device_map="auto", load_in_8bit=True)
print(model_16bit.is_loaded_in_8bit)
>>> True
```
In fact, we should assign the attribute `is_loaded_in_8bit` to the variable `model` instead of `cls`; otherwise `is_loaded_in_8bit` will be overridden by the next model that is loaded in 8-bit.
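A hypothetical, simplified sketch of the one-line fix described above (names are made up for illustration):

```python
class PreTrainedModelSketch:
    @classmethod
    def from_pretrained_sketch(cls, load_in_8bit=False):
        model = cls()
        # buggy version: cls.is_loaded_in_8bit = load_in_8bit  -> the flag sticks to the class and leaks to every model
        model.is_loaded_in_8bit = load_in_8bit  # fixed: the flag is per instance
        return model
```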
@sgugger @ydshieh | 11-23-2022 11:15:14 | 11-23-2022 11:15:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,407 | closed | [AutoBackbone] Improve API | # What does this PR do?
As backbones themselves have hidden states and optional attentions, this PR adds them to the `BackboneOutput`.
This way, frameworks that leverage backbones can return hidden states/attentions of the backbone if the user specifies `output_hidden_states=True` or `output_attentions=True`.
To do:
- [x] perhaps we should test backbones with all common tests | 11-23-2022 11:10:32 | 11-23-2022 11:10:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Pinging @ydshieh on this PR as I made 2 updates to test_modeling_common.py to support models which output a tuple of tensors as their first output. I've updated `test_determinism` and `test_save_load` to make them more general.<|||||>@michaelbenayoun as seen on the CI, the torch fx tests fail because backbones aren't supported yet.
Could you add support for them in a separate PR?<|||||>> Pinging @ydshieh on this PR as I made 2 updates to test_modeling_common.py to support models which output a tuple of tensors as their first output. I've updated `test_determinism` and `test_save_load` to make them more general.
Looks good for me regarding the tests.
I would personally put the check `isinstance(first, tuple)` inside the `check_xxx` methods, and call them recursively.
This way we don't need to worry if each element in the list/tuple would contain list/tuple. But no obligation for now.<|||||>@ydshieh for some reason the CI isn't run when I push new commits, do you know why?<|||||>Not really, try again with a new empty commit?
git commit --allow-empty -m "empty commit to trigger CI"
<|||||>@ydshieh Thanks a lot! @sgugger feel free to approve :) |
transformers | 20,406 | closed | [Image Transformers] to_pil fix float edge cases | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a quite nasty type checking bug: https://github.com/huggingface/transformers/issues/20394
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-23-2022 10:43:51 | 11-23-2022 10:43:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,405 | closed | Correct rescale | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-23-2022 10:40:52 | 11-23-2022 10:40:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20405). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,404 | closed | Fail when using FeatureExtractionPipeline for the inference of t5-small | ### System Info
- transformers version: '4.24.0'
- platform: Ubuntu 20.04.5 LTS
- Python version: 3.8.10
- Huggingface_hub version: '0.10.1'
- torch version: '1.13.0+cu117'
### Who can help?
@patrickvonplaten
@Narsil
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am comparing the embeddings of several very popular HF models (gpt2, bert-base-cased, etc).
Right now, I want to use the FeatureExtractionPipeline to extract the embeddings of the T5 (small) model.
```python
from transformers import pipeline
feature_extraction = pipeline('feature-extraction', model="t5-small")
sentence = "This is John"
embeddings = feature_extraction(sentence)
```
Error message:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 92, in __call__
return super().__call__(*args, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1081, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/base.py", line 990, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 70, in _forward
model_outputs = self.model(**model_inputs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1435, in forward
decoder_outputs = self.decoder(
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 937, in forward
raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
This error occurs as well when using the model at `google/flan-t5-base`. However, when trying the model at `google/mt5-small`, an error arises when loading the pipeline (before trying to use it):
```
>>> feature_extraction = pipeline('feature-extraction', model="google/mt5-small")
Downloading: 100%|██████████| 553/553 [00:00<00:00, 393kB/s]
Downloading: 100%|██████████| 1.20G/1.20G [01:09<00:00, 17.3MB/s]
Some weights of the model checkpoint at google/mt5-small were not used when initializing MT5Model: ['lm_head.weight']
- This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Downloading: 100%|██████████| 82.0/82.0 [00:00<00:00, 44.2kB/s]
Downloading: 100%|██████████| 4.31M/4.31M [00:05<00:00, 721kB/s]
Downloading: 100%|██████████| 99.0/99.0 [00:00<00:00, 4.37kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/pipelines/__init__.py", line 801, in pipeline
tokenizer = AutoTokenizer.from_pretrained(
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 619, in from_pretrained
return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1777, in from_pretrained
return cls._from_pretrained(
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1932, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/models/t5/tokenization_t5_fast.py", line 134, in __init__
super().__init__(
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/venv-tests/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 120, in __init__
raise ValueError(
ValueError: Couldn't instantiate the backend tokenizer from one of:
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```
Is this pipeline not yet prepared for T5 models?
### Expected behavior
I would expect the typical output of the pipeline (a list with a list of embeddings). | 11-23-2022 10:23:46 | 11-23-2022 10:23:46 | Do you have `sentencepiece` installed ?
`pip install sentencepiece` should fix it (as suggests the last line of the error).<|||||>Thanks for the reply.
Installing `sentencepiece` could solve the error when loading `google/mt5-small`.
However, the error when using the other T5 models (`t5-small` and `google/flan-t5-base`) is not related to `sentencepiece`. <|||||>```
(1) a `tokenizers` library serialization file,
(2) a slow tokenizer instance to convert or
(3) an equivalent slow tokenizer class to instantiate and convert.
You need to have sentencepiece installed to convert a slow tokenizer to a fast one.
```
Either you don't have `tokenizers` installed, or `sentencepiece`. It works perfectly in my environment.
Can you try this:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small") # This will use `tokenizers` but there is a warning about byte-fallback
# OR
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small", use_fast=False) # This will use sentencepiece
```<|||||>Thank you for the response.
I have checked and both `tokenizers` and `sentencepiece` are now installed in my environment.
```
$ python -c "import sentencepiece"
$ python -c "import tokenizers"
```
Additionally, all these tests work:
1. Initializing the tokenizers
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small") # This will use `tokenizers` but there is a warning about byte-fallback
# OR
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small", use_fast=False) # This will use sentencepiece
```
2. Executing the fast and slow tokenizers:
```
sentence = "This is John"
tokenizer(sentence)
{'input_ids': [1494, 339, 4040, 1], 'attention_mask': [1, 1, 1, 1]}
```
3. Initializing the feature extraction pipeline with models `google/mt5-small`, `t5-small` and `google/flan-t5-base`.
However, **the error arises when executing the feature extraction pipeline** with the models `google/mt5-small`, `t5-small` and `google/flan-t5-base`:
```
from transformers import pipeline
feature_extraction = pipeline('feature-extraction', model="google/mt5-small")
sentence = "This is John."
feature_extraction(sentence) # This line breaks
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 92, in __call__
return super().__call__(*args, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1074, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1081, in run_single
model_outputs = self.forward(model_inputs, **forward_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/base.py", line 990, in forward
model_outputs = self._forward(model_inputs, **forward_params)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/pipelines/feature_extraction.py", line 70, in _forward
model_outputs = self.model(**model_inputs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1435, in forward
decoder_outputs = self.decoder(
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
return forward_call(*input, **kwargs)
File "/home/antonio/Documents/Work/LHF/side-projects/compliance/compliance-analyzer/venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 937, in forward
raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
```
PS: In case it is relevant, my protobuf version is 3.19.1 (I checked it via `$ protoc --version` and via `$ python -m pip show protobuf`)<|||||>Hello, yes `feature-extraction` doesn't really mean much for an encoder-decoder model.
You could use just the encoder, which is probably the closest you want. Keep in mind that models that were not intended for feature extraction will not necessarily be really good at it.
```
pipe.model = pipe.model.encoder # This might depend on a model per model basis, but I think this is what you are looking for, you want just the encoder part of the model.
```<|||||>Thank you for the hack! Closing this issue :) |
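For completeness, a slightly fuller sketch of the encoder-only workaround discussed above; the mean pooling over tokens is an assumption for getting one vector per sentence, not part of the original suggestion:
```python
import torch
from transformers import pipeline

pipe = pipeline("feature-extraction", model="t5-small")
pipe.model = pipe.model.encoder  # keep only the encoder half of the encoder-decoder model

features = pipe("This is John")                      # nested list: [1, n_tokens, hidden_size]
sentence_vector = torch.tensor(features[0]).mean(dim=0)
print(sentence_vector.shape)                         # torch.Size([512]) for t5-small
```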
transformers | 20,403 | closed | SwiGLU activation function | ### Feature request
Since it has been recently used in [PaLM](https://arxiv.org/abs/2204.02311) and several papers report its better performance, it would be good to have access to a [SwiGLU](https://arxiv.org/pdf/2002.05202v1.pdf) implementation as an activation function.
### Motivation
I am building a biomedical RoBERTa-based model with specific biomedical vocabulary. It could be seen as a PubMedBERT version with RoBERTa architecture and BPE vocab.
Since RoBERTa is already a few years old, I also want to add recent improvements to the architecture and training.
I have tried myself to generate a RoBERTa model with two extra features. One is to remove bias from the FFN layers and the other to add the SwiGLU activation to these.
My approach has been to copy the code of [roberta_modeling.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/roberta/modeling_roberta.py) and modify its `RobertaIntermediate` class to a `EXcellRobertaIntermediate` class including the `swiglu` activation and a bias=`config.dense_layer_bias` attribute in the `nn.Linear` instantiation.
This works well for a first training of the model. However, when loading the model I find problems.
The first problem was that the model config has `activation=swiglu` and there is some ContextManager that does not allow for that option. I did a dirty workaround, keeping `activation=gelu` while keeping the swiglu in the code. This works and the model trains... but if I want to further train it or use it for fine-tuning, it will drop the extra layers generated by the swiglu. Here is an example output:
```
from smtag.excell_roberta.modeling_excell_roberta import EXcellRobertaForMaskedLM
model = EXcellRobertaForMaskedLM.from_pretrained('/app/excell-roberta-training/checkpoint-50/')
loading configuration file /app/excell-roberta-training/checkpoint-50/config.json
Model config EXcellRobertaConfig {
"architectures": [
"EXcellRobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bias_dense_layers": false,
"bias_norm": false,
"bos_token_id": 0,
"classifier_dropout": null,
"dense_layer_bias": false,
"eos_token_id": 1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 3,
"position_embedding_type": "absolute",
"sep_token_id": 1,
"swiglu": true,
"tokenizer_class": "RobertaTokenizerFast",
"torch_dtype": "float32",
"transformers_version": "4.20.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 64000
}
loading weights file /app/excell-roberta-training/checkpoint-50/pytorch_model.bin
Some weights of the model checkpoint at /app/excell-roberta-training/checkpoint-50/ were not used when initializing EXcellRobertaForMaskedLM: ['roberta.encoder.layer.2.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.0.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.3.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.11.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.8.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.7.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.9.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.5.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.6.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.4.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.1.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.10.intermediate.intermediate_dense.weight']
- This IS expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of EXcellRobertaForMaskedLM were initialized from the model checkpoint at /app/excell-roberta-training/checkpoint-50/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use EXcellRobertaForMaskedLM for predictions without further training.
model(**excell("acetyltransferase is something that should give extra subtokens to the tokenizer", truncation=True, padding="max_length", return_tensors='pt'))
MaskedLMOutput(loss=None, logits=tensor([[[-0.1479, 0.3992, -0.3396, ..., -0.3373, -0.8730, -0.7037],
[ 0.1812, 0.5421, -0.4052, ..., -0.0612, -0.6076, -1.0300],
[-0.1578, 0.6487, -0.8400, ..., 0.0745, -0.6941, -0.7082],
...,
[-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326],
[-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326],
[-0.2610, 0.6921, -0.6040, ..., -0.0400, -0.6101, -0.9326]]],
grad_fn=<AddBackward0>), hidden_states=None, attentions=None)
model = EXcellRobertaForMaskedLM.from_pretrained('/app/excell-roberta-training/checkpoint-50/')
loading configuration file /app/excell-roberta-training/checkpoint-50/config.json
Model config EXcellRobertaConfig {
"architectures": [
"EXcellRobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bias_dense_layers": false,
"bias_norm": false,
"bos_token_id": 0,
"classifier_dropout": null,
"dense_layer_bias": false,
"eos_token_id": 1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 3,
"position_embedding_type": "absolute",
"sep_token_id": 1,
"swiglu": true,
"tokenizer_class": "RobertaTokenizerFast",
"torch_dtype": "float32",
"transformers_version": "4.20.0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 64000
}
loading weights file /app/excell-roberta-training/checkpoint-50/pytorch_model.bin
Some weights of the model checkpoint at /app/excell-roberta-training/checkpoint-50/ were not used when initializing EXcellRobertaForMaskedLM: ['roberta.encoder.layer.2.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.0.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.3.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.11.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.8.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.7.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.9.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.5.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.6.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.4.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.1.intermediate.intermediate_dense.weight', 'roberta.encoder.layer.10.intermediate.intermediate_dense.weight']
- This IS expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing EXcellRobertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the weights of EXcellRobertaForMaskedLM were initialized from the model checkpoint at /app/excell-roberta-training/checkpoint-50/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use EXcellRobertaForMaskedLM for predictions without further training.
```
I would like to check with you whether there is a recommended way to do this, or whether it is possible at all without big modifications to transformers.
We plan, once the model is published, to submit a request to add it to the library.
I would also be happy to contribute the SwiGLU activation itself if that is possible. The main issue I see is that instantiating a SwiGLU class requires instantiating an extra `nn.Linear` layer, which changes the calling behavior compared to the other activation functions.
I will also be happy to contribute on this topic.
### Your contribution
I have added two main modifications to the original code of RoBERTa:
First, I generated the class `SwiGLU`. I know that this is not the place to define this class, but this has been a test so far.
```python
import torch.nn as nn
import torch.nn.functional as F


class SwiGLU(nn.Module):
def forward(self, x):
x, gate = x.chunk(2, dim=-1)
return F.silu(gate) * x
```
The other modification is:
```python
import torch
import torch.nn as nn

from transformers.activations import ACT2FN


class EXcellRobertaIntermediate(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.intermediate_size, bias=config.dense_layer_bias)
self.swiglu = config.swiglu
if self.swiglu:
self.swiglu = True
self.intermediate_act_fn = SwiGLU()
self.intermediate_dense = nn.Linear(config.intermediate_size//2, config.intermediate_size, bias=config.dense_layer_bias)
elif isinstance(config.hidden_act, str):
self.intermediate_act_fn = ACT2FN[config.hidden_act]
else:
self.intermediate_act_fn = config.hidden_act
def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
if self.swiglu:
hidden_states = self.dense(hidden_states)
hidden_states = self.intermediate_act_fn(hidden_states)
hidden_states = self.intermediate_dense(hidden_states)
else:
hidden_states = self.dense(hidden_states)
hidden_states = self.intermediate_act_fn(hidden_states)
return hidden_states
```
I would be happy to contribute with the SwiGLU activation and eventually to bring the entire model to transformers. | 11-22-2022 09:50:53 | 11-22-2022 09:50:53 | Indeed, I was able to solve the issue with the loading of the SwiGLU layers using the ugly fix of keeping the activation function definition in `EXcellRobertaConfig` as `gelu`, while adding a `swiglu` parameter that, if set to True, overrides the activation function.
I am not sure if this is a recommended procedure... I would expect that it is not.
I would be happy to get any comment on this and contribute with the addition of SwiGLU as an activation function.
<|||||>Hi there! We would recommend you to modify the modeling file to suit your needs, you can then include it with your checkpoint using the [custom model on the Hub](https://huggingface.co/docs/transformers/custom_models#sending-the-code-to-the-hub) feature.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
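For reference, a rough sketch of the "custom model on the Hub" route suggested earlier in this thread; the class and parameter names come from the snippets above, and the registration/push calls should be double-checked against the custom-models documentation:
```python
from transformers import PretrainedConfig


class EXcellRobertaConfig(PretrainedConfig):
    model_type = "excell-roberta"  # placeholder model type chosen for this sketch

    def __init__(self, swiglu=True, dense_layer_bias=False, **kwargs):
        self.swiglu = swiglu
        self.dense_layer_bias = dense_layer_bias
        super().__init__(**kwargs)


EXcellRobertaConfig.register_for_auto_class()
# EXcellRobertaForMaskedLM.register_for_auto_class("AutoModelForMaskedLM")
# model.push_to_hub("excell-roberta-base")  # ships the custom code together with the weights
print(EXcellRobertaConfig(swiglu=True, dense_layer_bias=False))
```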
transformers | 20,402 | closed | Cached model files can't be referred in docker container on AWS Lambda | ### System Info
{
"errorMessage": "Can't load tokenizer for 'model_name'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'model_name' is the correct path to a directory containing all relevant files for a BlenderbotTokenizer tokenizer.",
"errorType": "OSError",
"requestId": "",
"stackTrace": [
" File \"/var/lang/lib/python3.9/importlib/__init__.py\", line 127, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n",
" File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\n",
" File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\n",
" File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\n",
" File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\n",
" File \"/var/task/app.py\", line 22, in <module>\n tokenizer = BlenderbotTokenizer.from_pretrained(MODEL_NM,local_files_only=True)\n",
" File \"/var/task/transformers/tokenization_utils_base.py\", line 1761, in from_pretrained\n raise EnvironmentError(\n"
]
}
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I am pretty sure cached model files are included in my image and TRANSFORMERS_CACHE is also set properly when building the image.
I can see that my model files exist in container through log output:
```
models--xxxxxxx : ['blobs', 'refs', 'snapshots']
TRANSFORMERS_CACHE = ./parent_dir_for_model/
```
Code is simply like this, and it works well on my local python 3.10 environment (can automatically go for the path defined in `TRANSFORMERS_CACHE` without trying to download) .
```
tokenizer = BlenderbotTokenizer.from_pretrained(path,local_files_only=True)
model = BlenderbotForConditionalGeneration.from_pretrained(path,local_files_only=True)
```
My version of transformers is 4.24.
Could this be a compatibility issue with the Lambda Python 3.9 runtime?
### Expected behavior
Cached model files should be loaded properly based on the path defined by `TRANSFORMERS_CACHE` . | 11-23-2022 09:16:40 | 11-23-2022 09:16:40 | |
transformers | 20,401 | closed | Use updated attributes when saving tokenizers | # What does this PR do?
Use updated attributes when saving tokenizers.
Fix #20395 | 11-23-2022 08:55:35 | 11-23-2022 08:55:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>For the record: running slow tests with the generic fix:
2 are (By-)T5 issues, other 3 from Bert/RocBert/NLLB
```bash
=FAILED tests/models/bert/test_tokenization_bert_tf.py::BertTokenizationTest::test_saved_model - ValueError: The two structures don't have the same nested structure.
FAILED tests/models/byt5/test_tokenization_byt5.py::ByT5TokenizationTest::test_added_token_serializable - ValueError: Both extra_ids (125) and additional_special_tokens (['new_token']) are provided to ByT5Tokenizer. In this case the additional_special_tokens must include the extra_ids tokens
FAILED tests/models/nllb/test_tokenization_nllb.py::NllbTokenizationTest::test_save_pretrained - ValueError: Non-consecutive added token 'ar_AR' found. Should have index 1229 but has index 1204 in saved vocabulary.
FAILED tests/models/roc_bert/test_tokenization_roc_bert.py::BertTokenizationTest::test_sequence_builders - assert [1, 5, 6, 2] == [101, 5, 6, 102]
FAILED tests/models/t5/test_tokenization_t5.py::T5TokenizationTest::test_added_token_serializable - ValueError: Both extra_ids (100) and additional_special_tokens (['new_token']) are provided to T5Tokenizer. In this case the additional_special_tokens must include the extra_ids tokens
```<|||||>Will try to make tokenization code better, but need to merge in order to unblock tiny model creation (so for pipeline testing and ONNX testing) |
transformers | 20,400 | closed | Fix doctest file path issue | # What does this PR do?
Fix doctest file path issue.
(Currently, the whole suite fails from the beginning) | 11-23-2022 08:13:48 | 11-23-2022 08:13:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20400). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,399 | closed | Tokens truncated if exceeded 512 tensor shape | ### System Info
Hi,
I'm using the LayoutLMv2 pretrained transformer model. If I set `truncation=True`, only half of the entities in the image are detected and the remaining ones are not.

### Who can help?
@sanchit-gandhi @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
can't modify model weights shape to match the input tensor shape
### Expected behavior
I want to get full predictions. Thanks in advance! | 11-23-2022 07:09:34 | 11-23-2022 07:09:34 | Hey @thoufeeq1218! That's a lot of apple crumble! cc'ing in the vision specialist @NielsRogge<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge ping on this issue.<|||||>Hi,
The recommendation here is to apply a "sliding window" approach, which means that, if your sequence of tokens is > 512, you apply a sliding window with a certain stride (like a window that each time takes 512 tokens as input, and then you shift the window 128 tokens - this is called the stride) and then average predictions for tokens which are part of several windows.
You can specify `return_overflowing_tokens` and `stride` arguments in the processor/tokenizer's call method.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
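A hedged sketch of the sliding-window suggestion above, relying on the tokenizer's overflow support; the stride value mirrors the example in the comment, and the exact call signature should be checked against the LayoutLMv2 tokenizer/processor docs:
```python
from transformers import LayoutLMv2TokenizerFast

tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")

words = ["item"] * 1000                 # pretend OCR output longer than 512 tokens
boxes = [[0, 0, 10, 10]] * 1000         # one normalized box per word

encoding = tokenizer(
    words,
    boxes=boxes,
    truncation=True,
    max_length=512,
    stride=128,
    padding="max_length",
    return_overflowing_tokens=True,
    return_tensors="pt",
)
print(encoding["input_ids"].shape)      # (num_windows, 512); average predictions on overlapping tokens
```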
transformers | 20,398 | closed | Generating exactly 1 word (instead of token) using autoregressive LMs | ### Feature request
I am not sure if there exists any way to make the autoregressive type of LMs (like GPT) generate exactly 1 next word (which may consist of multiple tokens) instead of just 1 token.
e.g. ` Singing is a kind of ____ -> entertain (token), entertainment (word)`
### Motivation
I came across this problem when trying to evaluate LMs performance on the next word prediction task, especially cloze style prompting (like in the LAMA dataset). AFAIK, most existing solutions just generate 1 token, which could lead to incorrect evaluation.
### Your contribution
My current workaround is to overgenerate then split on whitespace to get the first word. Then to get the scores for the generated word, I would calculate the product of all the constituent tokens. This is in no way optimal, especially when we need to get the top k word predictions, as we need to increase the number of beams and return sequences. Moreover, the first few generated tokens would likely be the same across different beams. | 11-23-2022 00:31:21 | 11-23-2022 00:31:21 | cc @gante <|||||>Hi @joey234
By default, we don't directly support that functionality. While generating, the model is unaware of where a word starts and ends -- it exclusively works at a token level. The solution you described is IMO the simplest way to do it, and the engineering effort of a working solution probably exceeds the extra computing time... unless you intend to use it for a large number of models!
I also haven't seen other requests for this feature, so I won't devote resources to building it :) However, if you're interested in building it, I'd be happy to guide you.
P.S.: you mentioned "I would calculate the product" related to scores. Be mindful that our scores are UNORMALIZED LOG-PROBABILITIES, so the probability of a word is `exp(sum of the log_softmax(scores))` or `prod(softmax(scores))` ⚠️
_________________________________________
Here are my two cents on how to build a solution to this problem. You have two options, A and B. If it was me working on this problem, I'd go with B (A seems painful to build). Both options can be easily extended to force generate to stop after N words.
### Option A (most compute efficient)
In essence, you must mix two pieces of knowledge: 1) knowing which tokens correspond to the end of a word; 2) building a custom stopping mechanism.
Piece 1) varies from model to model, and you should refer to our tokenizer documentation to find the needed information. Here's an intro to how tokenizers work: https://huggingface.co/course/chapter2/4?fw=pt
As for 2), you can see related examples of stopping criteria [here](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/stopping_criteria.py). In essence, you want to return `True` when a token from 1) is generated. You'd have to build a new class and pass an instance to `generate` using the `stopping_criteria` argument.
### Option B (easiest to build)
Here you would only need to build a stopping criteria class. In essence, the class would decode the text with the tokenizer at each generation step and, if the word is complete (e.g. if a space is detected), return `True`. As described in option A, you would pass an instance to `generate` using the `stopping_criteria` argument. <|||||>Thank you so much for the guidance π€. I will try building based on the `stopping_criteria` argument.<|||||>@joey234 Hi, any progress? |
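A minimal sketch of the "Option B" stopping criteria described above; the end-of-word heuristic (a space appearing in the newly generated text) is an assumption:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList


class StopAfterOneWord(StoppingCriteria):
    def __init__(self, tokenizer, prompt_len):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len

    def __call__(self, input_ids, scores, **kwargs):
        new_text = self.tokenizer.decode(input_ids[0, self.prompt_len:])
        return " " in new_text.strip()  # a second word started, so the first word is complete


tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Singing is a kind of", return_tensors="pt")
criteria = StoppingCriteriaList([StopAfterOneWord(tokenizer, inputs.input_ids.shape[1])])
output = model.generate(**inputs, max_new_tokens=10, stopping_criteria=criteria)
first_word = tokenizer.decode(output[0, inputs.input_ids.shape[1]:]).strip().split()[0]
print(first_word)
```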
transformers | 20,397 | closed | [CodeGen] RuntimeError: where expected condition to be a boolean tensor, but got a tensor with dtype Half | ### System Info
transformers==4.21.2
torch==1.10.2+cu113
HW: 12 cpu, 90gb ram, 1xA100-80g
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
**The following works with CodeGen-350M-mono / CodeGen-6B-mono**
```python
import torch
from transformers import AutoTokenizer, CodeGenForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono", device_map="auto")
```
**But throws the RuntimeError for CodeGen-16B-mono**
```python
import torch
from transformers import AutoTokenizer, CodeGenForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono")
model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-16B-mono", device_map="auto")
```
### Expected behavior
Expect the 16B model to work as well. | 11-22-2022 23:13:30 | 11-22-2022 23:13:30 | cc @younesbelkada <|||||>Another way to reproduce is to use https://github.com/huggingface/transformers-bloom-inference
and run:
```
make codegen-mono
```<|||||>Hi @jrdzha
Thanks so much for your issue! Unfortunately I did not manage to reproduce your issue. How are you getting this error? Could you provide me a full example script please?
Here is the snippet I used:
```python
import torch
from transformers import AutoTokenizer, CodeGenForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-16B-mono")
model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-16B-mono", device_map="auto")
text = "def main():"
inputs = tokenizer(text, return_tensors="pt")
model.generate(**inputs)
```
Also can you make sure to use the latest version of `accelerate` ? `pip install --upgrade accelerate` + is there anything blocking on your side not to use the latest version of `transformers`? `pip install --upgrade transformers` <|||||>Seems also to be a duplicate of https://github.com/arrmansa/Basic-UI-for-GPT-J-6B-with-low-vram/issues/4 I also ran the experiment with `torch==1.10` and not getting any error ! <|||||>It seems this issue is inconsistent. I did have it work once, but without changing any code... I'll spend some more time trying to reproduce this more reliably...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,396 | closed | Add type hints for Whisper models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adding type hints for the `Whisper` model (TensorFlow). Related to issue #16059.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? _Task requested [here](https://github.com/huggingface/transformers/issues/16059#issuecomment-1324302192)._
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? _Ran `make fixup` before last commit._
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-22-2022 22:19:11 | 11-22-2022 22:19:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @Rocketknight1, can you help me understand failed test? The following [message](https://app.circleci.com/pipelines/github/huggingface/transformers/52274/workflows/ccc607ce-79a5-4b31-a567-22e5e638338b/jobs/626467?invite=true#step-111-353) comes from the CI test logs:
```
You have to specify either decoder_input_ids or decoder_inputs_embeds
```
According to Whisper [documentation](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/whisper#transformers.TFWhisperForConditionalGeneration.call.past_key_values),
> If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds.
But, when I checked the argument `inputs_embeds` set for testing, I noticed that `inputs_embeds=None`.
I'm confused because the model had `=None` for all its parameters before I added the type hints.<|||||>Hey! I think you are right here, the documentation is misleading. This only happens for the `...ForConditionalGeneration` and not for the model. Opening a PR right now to fix this<|||||>@Rocketknight1 this PR is ready for review.
Besides including the type hints for main Whisper models, I changed the `output_type` to `TFSeq2SeqModelOutput` in the `TFWhisperModel`'s _docstrings_ since it has a wrong output (probably a confusion with the `TFWhisperForConditionalGeneration` model). |
transformers | 20,395 | closed | some tokenizer(s) don't save the updated attributes | ### System Info
transformers version: 4.25.0.dev0
Torch version: 1.13.0+cpu
Cuda available: False
Cuda version: None
CuDNN version: None
Number of GPUs available: 0
### Description
For `GPT2Tokenizer(Fast)`, set `tokenizer.model_max_length` to `128` (originally `1024`), save it, then reload: `tokenizer.model_max_length` will be `1024` again.
### Reproduction
```python
from transformers import GPT2Tokenizer, GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
print(tokenizer.model_max_length)
tokenizer.model_max_length = 128
print(tokenizer.model_max_length)
tokenizer.save_pretrained("my-gpt2")
tokenizer_loaded = GPT2TokenizerFast.from_pretrained("my-gpt2")
print(tokenizer_loaded.model_max_length)
```
The output is
```bash
1024
128
1024
```
### Expected behavior
`tokenizer_loaded.model_max_length` should be `128` in the above example. In general, the updated attribute(s) should be saved. | 11-22-2022 21:23:12 | 11-22-2022 21:23:12 | It turns out that we use
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_fast.py#L735
which is defined
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_base.py#L1475
Is this the expected behavior, i.e. we don't want to save the modified attributes like `model_max_length`?<|||||>Hi @ydshieh,
This is a very good remark! I've also often wondered... what I'm afraid of is that for some attributes the tokenizer's behavior is different between:
1. A tokenizer initialized with some parameters and then with a parameter that is modified on the fly
2. A tokenizer that would be initialized with the final parameters of the previous tokenizer
What is complicated with tokenizers is that all tokenizers share the lines of code you mention but many of them have specificities implemented on top of them and it's quite hard to be sure that we won't break more things than we fix (and this if we consider that we don't add a breaking change...). I think there must be a reason why historically it was chosen to only save the `init_kwargs` for the `tokenizer_config`:
https://github.com/huggingface/transformers/blob/0ee71188ff184ee5f8b70081665858301fe4afb1/src/transformers/tokenization_utils_base.py#L2084 |
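A practical workaround implied by the discussion above: since only `init_kwargs` end up in `tokenizer_config.json`, passing the attribute at load time (instead of mutating it afterwards) should survive a save/reload round-trip:
```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2", model_max_length=128)
tokenizer.save_pretrained("my-gpt2")

reloaded = GPT2TokenizerFast.from_pretrained("my-gpt2")
print(reloaded.model_max_length)  # 128
```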
transformers | 20,394 | closed | Regression in CLIPProcessor from 4.24.0 -> 4.25.0.dev0 | ### System Info
- `transformers` version: 4.24.0 / 4.25.0.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- Huggingface_hub version: 0.11.0.dev0
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts @sg
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
There seems to be a regression of `CLIPProcessor` between current `main` and `4.24`
You can easily reproduce it by running the following script with current main `4.25.0.dev0` and `4.24` to see a difference:
```python
#!/usr/bin/env python3
from transformers import CLIPProcessor
import transformers
from PIL import Image
import PIL.Image
import numpy as np
import torchvision.transforms as tvtrans
import requests
from io import BytesIO
print(transformers.__version__)
url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")
BICUBIC = PIL.Image.Resampling.BICUBIC
image = image.resize([512, 512], resample=BICUBIC)
image = tvtrans.ToTensor()(image)
np_image = np.asarray(image)
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
pixel_values = processor(images=2 * [np_image], return_tensors="pt").pixel_values
print(pixel_values.abs().sum())
print(pixel_values.abs().mean())
```
The outputs for the different versions are as follows:
```
4.24.0
tensor(287002.5000)
tensor(0.9533)
```
```
4.25.0.dev0
tensor(503418.8125)
tensor(1.6722)
```
The code snippet above comes from reproducing a problem that happens when updating `transformers` to main for https://github.com/SHI-Labs/Versatile-Diffusion .
https://github.com/SHI-Labs/Versatile-Diffusion only works with `transformers==4.24.0` - the pipeline gives random results when using `transformers==4.25.0.dev0`
### Expected behavior
It seems like a bug was introduced after the 4.24 release. The code snippet above might seem a bit edge-casy, but I believe people have already started to build all kinds of image processing pipelines with CLIP. | 11-22-2022 19:38:13 | 11-22-2022 19:38:13 |
transformers | 20,393 | closed | can't from transformers import TFBertModel | ### System Info
- `transformers` version: 4.24.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.8
- Huggingface_hub version: 0.11.0
- PyTorch version (GPU?): 1.13.0+cpu (False)
- Tensorflow version (GPU?): 2.11.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@LysandreJik @Rocket
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction

### Expected behavior
`from transformers import BertTokenizer` works well,
but `from transformers import TFBertModel` doesn't work.
It fails as shown in the picture above.
How do I resolve this error(ModuleNotFoundError: No module named 'keras.saving.hdf5_format')? | 11-22-2022 19:27:53 | 11-22-2022 19:27:53 | This is due to the latest release of TensorFlow which broke many things. You need to either install `transformers` from source (this is fixed on the main branch) or downgrade `Tensorflow` to 2.10. :-)<|||||>This also broke the `evaluate` CI :) Will fix to `<=2.10` for now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,392 | open | Add BART-LS | ### Model description
BART-LS (Long Bart), presented in this [paper](https://arxiv.org/pdf/2209.10052.pdf), establishes a new SOTA on a number of NLP tasks and long form datasets. It uses pooling-augmented block-wise attention and a novel pre-training strategy to achieve this.
Given my interest in long text summarisation I'm very keen to get this into the wonderful transformers library and to start benchmarking it against other models. Therefore, happy to take this on and ping any members of the team if I face any blockers. If this fits with the library's plans let me know and I'll start working on a PR for this.
### Open source status
- [X] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
Original Model Repo (which includes the model weights): https://github.com/facebookresearch/bart_ls | 11-22-2022 17:52:13 | 11-22-2022 17:52:13 | Any update on this ? @KMFODA I can help please let me know if you want to collaborate on this ?<|||||>Hey @thakursc1 I'm still waiting on someone from the HF team to confirm if this can be integrated into their codebase if we work on this as this only becomes beneficial for my use case if I can use it in the transformers master branch.
Happy to collaborate on this as soon as we hear back.<|||||>Hey @KMFODA, wondering if there are any updates on this? Thanks! <|||||>Hey @jmzeng. I haven't heard back from anyone from the HF team yet and unfortunately a few things have changed in my workloads and I don't think I'll be able to work on this. Maybe someone else can work on this if they have the bandwidth and ping the HF team when they have a draft PR for them to review?<|||||>@KMFODA BART-LS looks like it would be a great addition to the library :)
If you or another community member would still like to add the model, please feel free to open a PR and let us know in the meantime if there's any difficulties integrating it. |
transformers | 20,391 | closed | Generate: fix plbart generation tests | # What does this PR do?
As the title says :) | 11-22-2022 17:16:07 | 11-22-2022 17:16:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,390 | closed | [OPT/Galactica] Load large `galactica` models | # What does this PR do?
This PR fixes a small bug on `OPT`. Before, the `bias` term [was always set to `True`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/opt/modeling_opt.py#L277) - leading to some external implementations to hardcode it if they wanted to train an OPT model without bias terms. See for example [here](https://github.com/paperswithcode/galai/blob/c1e16979c1748e7e823fe96da941d6df60f1006b/galai/architecture.py#L280). This PR aims to give more control on whether we should use or not `bias` terms on `Linear` layers of OPT.
The PR also fixes the same issue with `nn.LayerNorm`. Some derivatives of OPT do not use learnable parameters for the layer norm's weights and biases (i.e., they set `elementwise_affine` to `False`), so this avoids hardcoded hacks in the future.
This PR should not be a breaking change, as the default values of these booleans are set to `True` (preserving the previous behaviour).
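For illustration, a minimal sketch of how the new switches might be used once the config is updated (the flag names `enable_bias` and `layer_norm_elementwise_affine` and the tiny sizes below are assumptions for the example, not the confirmed final API):

```python
from transformers import OPTConfig, OPTForCausalLM

# Build a small OPT variant without linear biases and without learnable LayerNorm parameters
# (flag names are assumptions here; check the merged OPTConfig for the exact attribute names).
config = OPTConfig(
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=4,
    ffn_dim=128,
    enable_bias=False,
    layer_norm_elementwise_affine=False,
)
model = OPTForCausalLM(config)
print(model.model.decoder.layers[0].self_attn.k_proj.bias)  # expected to be None with biases disabled
```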
This PR should also fix: https://huggingface.co/facebook/galactica-30b/discussions/4 (ofc, after updating the relevant config files)
cc @sgugger @ydshieh @mrm8488
All slow tests pass
| 11-22-2022 17:01:29 | 11-22-2022 17:01:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I am not 100% sure if this is the approach we want to have, despite I can understand the intention. Would like to hear from @sgugger.
For reference, `class OPTDecoderLayer` from `galai` does pass `bias` to `OPTAttention`
https://github.com/paperswithcode/galai/blob/c1e16979c1748e7e823fe96da941d6df60f1006b/galai/architecture.py#L280<|||||>Yes, I think it was a mistake from our side. We should either port a new model (with controlable bias and layer norm) and remove the `bias` boolean from `OPTAttention` as it is always set to `True` or go with this fix<|||||>Thanks!
Sorry for the last-minute clarification; I just realized that the description and title are not clear. The main goal of this PR is to support loading and using large `galactica` models that use the `OPT` architecture, as initially reported in: https://huggingface.co/facebook/galactica-30b/discussions/4, so the title and description are slightly misleading.
The snippet to reproduce:
```
import torch
from transformers import AutoTokenizer, OPTForCausalLM, AutoModel
tokenizer = AutoTokenizer.from_pretrained("facebook/galactica-30b")
model = OPTForCausalLM.from_pretrained("facebook/galactica-30b", device_map="auto", torch_dtype=torch.float16)
input_text = "The Transformer architecture [START_REF]"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
In case we don't merge this PR we may be want to add `galatica` as a separate new architecture as some `galactica` models (such as `30b`) does not use `bias` on linear layers and don't have any learnable weights on their `LayerNorm`<|||||>I understood, and yes, that will be the alternative if this PR is declined :-)<|||||>Hi @sgugger and @younesbelkada, it's one of the Galactica authors here. We think that there might be something wrong with the 30bn model specifically on HuggingFace. We're currently migrating our galai library to use the huggingface model without our custom OPT config. There seems to have been a conversion process applied to our models to give null weights to the biases (or something else similar to the OPT models), but specifically not on the 30bn file. Hopefully, this can be resolved without a PR by fixing the model file. See the great investigations done by @Jackmin801 on this ticket https://github.com/paperswithcode/galai/issues/37#issuecomment-1323929437<|||||>> For reference, `class OPTDecoderLayer` from `galai` does pass `bias` to `OPTAttention`
>
Hi @ydshieh, the `bias` flag is passed only so that `Galactica` extension of `OPT` architecture is backward compatible. We set all the additional config parameters to the values used by `OPT` (see https://github.com/paperswithcode/galai/blob/main/galai/config.py#L92-L95) so that `OPT` checkpoints work as before, but we set them accordingly in `Galactica` configs (see f.e., https://huggingface.co/mrm8488/galactica-125m/blob/main/config.json#L18). Whether these changes should be ported back to `modeling_opt` or the `Galactica` should be forked-out from it depends on how much it deviates from the general philosophy of Transformers as @sgugger noted.<|||||>Hi @AnthonyHartshorn
Thanks a lot for your message. Indeed, big kudos to @Jackmin801 for the investigation, his investigation in https://huggingface.co/facebook/galactica-30b/discussions/4#637e90571dbae0919104b582 helped me define the rootcause of the bug.
I guess it can be also fixed by saving zero bias and ones for the layer norms, updating the weights on the hub with the new ones can do the trick too yes.<|||||>As @sgugger said above, this goes very clearly against the foundation of `transformers` to add configurable parameters to a previous model architecture to support a new model architecture.
However, fixing this any other way would result in some setups breaking in the wild; it would require us to update the architecture name to `galactica` instead of `opt`, which would break every existing setup that currently uses these models unless they upgrade to the latest version.
Given that, I'm also inclined to accept this change even if it goes against our design decisions. If we could do it all over again however, I would heavily push for a new model architecture.<|||||>Thanks @LysandreJik for approving this PR. I have another related question. As pointed by Jackmin801 in the comment linked above by Anthony (https://github.com/paperswithcode/galai/issues/37#issuecomment-1323929437), almost all of the checkpoints were converted from our float16 checkpoints and uploaded to the hub in full float32 precision (except for 30B which is an exact copy). That's not the best for user experience: download time, disk usage and loading time doubles for no benefit. I wonder if we can fix it, there are couple options I see:
* upload our float16 checkpoints once this PR is merged. This would not be backward compatible as this PR is required,
* do the same conversion that @mrm8488 did, but `.half()` the models before exporting. This would be almost backward compatible except for the case when a user doesn't specify `pytorch_dtype` when loading a model, as after that the models would load as float16 by default,
* keep the existing checkpoints, potentially fix the 30B to be float32 as well for consistency (it wasn't working before this PR anyway). Not the best user experience,
* add new checkpoints galactica-125m-fp16, ..., galactica-120b-fp16. Might be too confusing for users.
What do you think? I'm in favor of the second option as it's the best for backward compatibility and user experience.<|||||>PyTorch automatically converts checkpoint weights to the dtype of the model when you load the state_dict, so option 2 is actually 100% backward compatible.<|||||>Thanks @sgugger, I missed the fact that `torch_dtype` is part of `config.json`.<|||||>This PR is ok for me - galactica is build on top of OPT so one could fine-tune OPT using these two configs => so this PR is def ok for me<|||||>Thanks everyone!
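For concreteness, a rough sketch of the "option 2" conversion discussed above (repository and output names are placeholders, not the actual hub PRs that were opened):

```python
from transformers import OPTForCausalLM

# Load an existing checkpoint, cast the weights to float16, and re-export the folder;
# the saved config.json should then record torch_dtype as float16.
model = OPTForCausalLM.from_pretrained("facebook/galactica-125m")
model = model.half()
model.save_pretrained("galactica-125m-fp16")
# The exported files can then be proposed as a hub PR replacing the float32 weights.
```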
@mkardas @mrm8488 : https://huggingface.co/facebook/galactica-30b/discussions/5 since now this PR has been merged, can you merge this PR to fix the initial issue for `30b` ? <|||||>@younesbelkada I'm not a member of the org yet. I've verified my work email address, but wasn't auto-added. How can I learn who are the admins?<|||||>@patrickvonplaten can you add me to https://huggingface.co/facebook (same username)?<|||||>I can merge the PR if this is the only thing needed! π€
<|||||>Thanks @ArthurZucker. I was working on providing float16 weights in the backward compatible way as discussed above. I think it's best to just fix all the checkpoints to make them float16 and keep zero biases for backward compatibility with HF 4.21.0-4.24.0. I'm in the middle of preparing a new HF hub PR for this, I'll let you know in case I won't be able to merge it.
@sgugger
From my tests on backward compatibility, it seems that calling `OPTForCausalLM.from_pretrained` with `torch_dtype=None, device_map=None` results in `float32` weights regardless of what's in the checkpoint bin files and `config.json`. However, `torch_dtype=None, device_map="auto"` results in the same weights type as in the checkpoint bin files, regardless of `config.json`. Is it to be expected?<|||||>I think this is expected as if you want to load a model natively without using `accelerate` (i.e. without adding `device_map="auto"`), `transformers` will automatically load the weights in fp32, in this case whenever you want to load a model with its native dtype of the weights you need to use `torch_dtype="auto"`. <|||||>Mmm, no. If it is indeed the case then it's a bug. Do you have a small reproducer/repo ID I could look at?<|||||>This is what I used:
```python
import torch
from transformers import OPTForCausalLM
for device_map in [None, "auto"]:
for dtype in [None, torch.float16, torch.float32]:
model = OPTForCausalLM.from_pretrained(
"facebook/galactica-125m",
revision="refs/pr/6",
torch_dtype=dtype,
device_map=device_map
)
print(f"[device_map={device_map}]: {dtype} -> {model.lm_head.weight.dtype}")
print()
```
What I get for `refs/pr/6` (which has `torch_dtype=float32` in config.json and `float16` bin files):
```
[device_map=None]: None -> torch.float32
[device_map=None]: torch.float16 -> torch.float16
[device_map=None]: torch.float32 -> torch.float32
[device_map=auto]: None -> torch.float16
[device_map=auto]: torch.float16 -> torch.float16
[device_map=auto]: torch.float32 -> torch.float32
```
For `facebook/opt-125m` the output is the same, even though `opt-125m` has `float16` both in config.json and bin files.<|||||>PRs replacing the existing `float32` checkpoints with `float16` checkpoints:
https://huggingface.co/facebook/galactica-125m/discussions/6
https://huggingface.co/facebook/galactica-1.3b/discussions/6
https://huggingface.co/facebook/galactica-6.7b/discussions/8
https://huggingface.co/facebook/galactica-30b/discussions/6
https://huggingface.co/facebook/galactica-120b/discussions/6<|||||>Found the issue. The PR mentioned above should make the result consistent between `device_map=None` and `device_map="auto"`. |
transformers | 20,389 | closed | [WIP] Add a `get_token_embeddings_size` to `PreTrainedModel` | Adds an intuitive and obvious method, `get_token_embeddings_size()`, to get the size of the current token embeddings on `PreTrainedModel`. This can be used to compute the new size when calling `resize_token_embeddings()`.
### Motivation
Current API design around the `resize_token_embeddings()` method requires doing the following to increase the size of the token embeddings by 1:
```python
# add 1 new embedding
current_embeddings = my_xformer.resize_token_embeddings(None)
new_embeddings_size = current_embeddings.num_embeddings + 1
my_xformer.resize_token_embeddings(new_embeddings_size)
```
This is counterintuitive and bleeds implementation details to the call site. It requires me to know
1. that calling a "resize" method with the argument `None` returns the object to be resized (which is not intuitive), and
2. that "size" means the property `num_embeddings` on the returned object (admittedly it's not difficult to guess, but it is still a *guess*, and is in fact an implementation detail that I shouldn't need to know).
# What does this PR do?
This PR enables instead the following:
```python
# add 1 new embedding
current_embeddings_size = my_xformer.get_token_embeddings_size()
new_embeddings_size = current_embeddings_size + 1
my_xformer.resize_token_embeddings(new_embeddings_size)
```
This provides an intuitively-named method to determine the current "size", and appropriately hides the implementation detail that "size" means the `num_embeddings` property on the object to be resized.
Fixes #20377 .
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [*] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. - #20377 .
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-22-2022 16:10:50 | 11-22-2022 16:10:50 | WIP because missing tests, i guess. I don't have a local environment set up atm, this was a quick edit on github. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20389). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>i still am interested in finishing this, just gotta find some time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,388 | closed | Generate: use `GenerationConfig` as the basis for `.generate()` parametrization | # What does this PR do?
This PR introduces `generation_config` as the main controller of `.generate()` calls.
In particular:
1. It adds a `from_model_config` class method to `GenerateConfig`, to load a generation config from a (legacy) model config;
2. Adds a `generation_config` argument to `.generate()`. If it is not passed, it will be loaded from a pre-determined sequence (check for `generation_config.json` -> if it fails, load from the model config; see the usage sketch after this list);
3. Because we always have a `generation_config` in `.generate()`, which holds all parametrization, gets rid of all local variables;
4. β οΈ Changes the arguments to `generate()` (and corresponding docstring) so as to exclude `generate_config` parameters (i.e. they were moved to `**kwargs`). This is mostly to avoid a massive docstring and list of arguments that make `.generate()` very messy at the moment -- `GenerationConfig`'s docstring explains all the ways `.generate()` can be controlled, organized by type of manipulation, while `.generate()`'s docstring focuses on the API.
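A minimal usage sketch of the `generation_config` argument described in point 2 above (the checkpoint and the generation settings are only illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Build a generation config explicitly instead of relying on ad hoc model.config attributes.
generation_config = GenerationConfig(max_new_tokens=20, do_sample=True, top_k=50)

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model.generate(**inputs, generation_config=generation_config)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```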
Notes: I've successfully run SLOW tests of GPT2 (which has a `generate_config.json`) and BART (which does not) against this PR. | 11-22-2022 15:56:24 | 11-22-2022 15:56:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Fully agree with @sgugger here.
Totally ok to just link to the `GenerateConfig` doc page -> think this make the docs online also cleaner actually.
Also I'd maybe rename `generate_config` to just `config` in generate or do you think this will cause confusion with the model's config?<|||||>Overall, this is a great improvement !<|||||>@sgugger @patrickvonplaten It is ready for review.
Major changes since the last review request:
1. `ModelClass.from_pretrained()` pre-loads a `generation_config` attribute to the model if a `generation_config.json` exists, as suggested above
2. Handle the case where the model config has nested dictionaries (e.g. a `decoder` component)
3. Keep full retrocompatibility, including ad hoc `model.config` changes before calling `GenerationMixin` functions (that's why you'll see `GenerationConfig.from_model_config` in so many places, all those functions may be called independently π )
4. Add documentation and enhance examples
Also FYI, I'm off until the 8th of Dec π΄ <|||||>Agreed with you @patrickvonplaten , that's a very good idea!<|||||>@sgugger @patrickvonplaten
Here is a summary of the key changes since your last review:
- (thanks for the suggestion!) In `model.from_pretrained`, `model.generation_config` is set from the model config if the generation config doesnβt exist, effectively making all future generation-capable models hold a default generation config parameter. NOTE: This required minor legacy handling logic, for the case where the user makes ad hoc model config changes to control generation (which the previous solution intentionally accounted for)
- added a default `prepare_inputs_for_generation`, which raises `NotImplementedError`, and updated the new `can_generate` check accordingly. Contrarily to @patrickvonplaten's suggestion, I've kept the `_validate_model()` check -- it returns an informative exception to the user if they try to generate with an incorrect class of a model with generation capabilities, like `AutoModel.from_pretrained(βgpt2β)`. Not using the right class was a common source of issues in the past.
- Improved the example to use named generation config files with an actual T5 example. I think two named generation configs would make the example too long π€ (cc @patrickvonplaten)
I was thinking of doing the following in a follow-up PR (to avoid adding more features to this already long PR that is blocking Arthur on Whisper work):
- Add the needed modifications such that `model.save_pretrained` can push to the hub a default generation config if the file doesnβt yet exist, from the `model.generation_config` parameter (as @sgugger suggested)<|||||>@patrickvonplaten -- I'm merging to unblock @ArthurZucker's work on Whisper.
Comments to the points above are still helpful, and I can include them in a subsequent PR! :D <|||||>The addition of `can_generate()` is breaking in Optimum, where we use `generate()` on models which do not inherit from `PreTrainedModel`. Why isn't `can_generate()` in `GenerationMixin`? Can a model inherit from `GenerationMixin` but not use `generate()`? cc @gante <|||||>@fxmarty `can_generate()` is called in `PreTrainedModel` at initialization time, to initialize the (new) generation config if it's a generation-compatible model. All models in `transformers` inherit `GenerationMixin`, regardless of whether they can generate, but in fact `can_generate()` is tangling the two classes at the moment, which is undesirable.
I may be able to rework this part, but I need to know -- what breaks on your end exactly?<|||||>> All models in transformers inherit GenerationMixin
Yes thanks, I forgot this part!
The PR I linked fix the issue on our end. I think what is breaking is that `generate()` is no more usable on models that are not inheriting from `PreTrainedModel` or that don't redefine `can_generate()`, because of https://github.com/huggingface/transformers/blob/8637316e5e94ba0a2493e5df7846f2f23f46eaef/src/transformers/generation/utils.py#L934
But it's a very minor issue, and the fix is easy, so it's probably not too important.<|||||>@fxmarty π
In the long run, I'd like to see if it's possible to separate the two (`PreTrainedModel` and `GenerationMixin`, where a model only inherits `GenerationMixin` if it can generate). It should help libraries downstream like `optimum`!
Let me know if I can be of further assistance. |
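For illustration, a minimal sketch of the kind of workaround mentioned above for wrappers that rely on `GenerationMixin` without inheriting from `PreTrainedModel` (the class name is a placeholder, and the rest of the generation plumbing is omitted):

```python
from transformers import GenerationMixin

class MyGeneratingWrapper(GenerationMixin):
    main_input_name = "input_ids"

    def can_generate(self) -> bool:
        # Opt back in explicitly, since the default check added here lives on PreTrainedModel.
        return True

    # prepare_inputs_for_generation, config handling, and the forward pass are omitted in this sketch.
```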
transformers | 20,387 | closed | [ESM] fix `accelerate` tests for esmfold | # What does this PR do?
Fixes slow tests that were not passing for `ESM`. In fact I was running `RUN_SLOW=1 pytest tests/models/esm/test_modeling_esm.py` and forgot to run `RUN_SLOW=1 pytest tests/models/esm/test_modeling_esmfold.py`.
cc @sgugger @Rocketknight1 | 11-22-2022 15:35:50 | 11-22-2022 15:35:50 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,386 | closed | chore: add link to the video cls notebook. | We recently added a [notebook](https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb) that shows how to fine-tune the VideoMAE model on a custom dataset. This PR adds the notebook link to the model doc.
Cc: @osanseviero | 11-22-2022 14:21:23 | 11-22-2022 14:21:23 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge I don't have the rights to merge the PR. If you have, could you perform the duties? |
transformers | 20,385 | closed | Indicate better minimal version of PyTorch in big model inference | # What does this PR do?
As pointed out in https://github.com/huggingface/accelerate/issues/880, the minimum version when using a `device_map` is not PyTorch 1.9 but PyTorch 1.11. This PR fixes that. | 11-22-2022 14:12:16 | 11-22-2022 14:12:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,384 | closed | More TF int dtype fixes | This PR fixes (hopefully) the last remaining TF int dtype issues.
- [x] Ensure all integer dummy inputs are int32 and add test
- [x] Ensure all serving signatures support all-int32
- [x] Check that this fixes `test_saved_model_creation_extended`
- [x] Add a second serving signature when saving with `save_pretrained` to support both int dtypes | 11-22-2022 13:16:22 | 11-22-2022 13:16:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Update: This PR now updates all serving signatures and dummy inputs to `tf.int64`. All tests like `test_saved_model_creation_extended` now pass.
However, I couldn't find any way to cleanly save both a `tf.int32` and a `tf.int64` signature. The reason is that our serving methods currently get their signatures from their tf.function() decorator. This decorator makes it very difficult to extract the underlying function, and we can't call `get_concrete_function` on it with a signature that doesn't match its existing signature. This meant it was quite hard even to insert a shim in `save()` or `save_pretrained()` to save both signatures!
I think we might just have to encourage people onto `tf.int64` as the standard int dtype when exporting models, and require them to pass their own custom signature/function is they want something else. Cc @gante @ydshieh @amyeroberts . If anyone has any good ideas, I'm open!<|||||>@gante that's a really good point - models serialized this way might throw errors if users pass tokens straight from our tokenizers, which doesn't seem very sensible for us to do. Let me keep digging - maybe I can find some hack to export an int32 and int64 serving signature, because that's probably too big of a problem to just leave in the codebase.
(Though right now the situation is that models serve either only int32 or only int64 based on what's in the serving signature, which differs between models, and that might be even worse)<|||||>Update (with a bit of a deep dive into the TF internals):
When you save a subclassed `Model` as `SavedModel`, the default signature is the forward pass of the model, with the input shape and dtypes that were used the *first* time the model was called. In the TF internals, what happens is that `model._set_save_spec()` is called when the model is built with those first inputs. This records the 'spec' of those inputs, and that spec is used to create the model trace for the SavedModel.
We used to have a huge number of problems because we build our models by passing tiny dummy inputs to them, which locked in that tiny dummy shape as the `save_spec` of the model. We get around that now by calling `_set_save_spec()` in the `__init__()` of the model, and passing flexible shapes (with `None` dimensions). This mostly works well!
The one downside is that even though we can save a spec with flexible shapes by doing this, there's no way to save a spec with flexible dtypes, and multiple specs aren't supported.
To save multiple traces with a `SavedModel`, you can use the `signatures` argument to `model.save()`. However, note that these can be a bit hard for users to find - if the user calls the model directly, they will only get the trace from the `save_spec`, which is locked to one dtype. In other words, you get behaviour like this (assuming the save_spec was `tf.int64`):
```python
# This works
loaded_model(int64_inputs)
# This complains that it can't find a matching signature
loaded_model(int32_inputs)
```
If you explicitly pass a `tf.int32` signature as well as a `tf.int64` signature to the `signatures` argument of `model.save()`, this doesn't fix the issue above. However, you will be able to do this:
```python
# This works
loaded_model.signatures["serving_int32"](int32_inputs)
```
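For reference, a hedged sketch of what exporting both an int64 and an int32 signature through the `signatures` argument might look like (the model, signature names, and output selection are illustrative, not the exact shim discussed in this PR):

```python
import tensorflow as tf
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("distilbert-base-uncased")  # any TF checkpoint; the name is an example

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int64, name="input_ids")])
def serving_int64(input_ids):
    return {"last_hidden_state": model(input_ids).last_hidden_state}

@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32, name="input_ids")])
def serving_int32(input_ids):
    return {"last_hidden_state": model(input_ids).last_hidden_state}

tf.saved_model.save(
    model,
    "exported_model",
    signatures={"serving_default": serving_int64, "serving_int32": serving_int32},
)
```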
Ideally, I'd like the `SavedModel` to be directly callable with `int32` or `int64`, but I haven't been able to find any way to make this possible, so unfortunately I think we have to pick a 'preferred' int dtype and support the other one only via an awkward call to `model.signatures`. I've added a commit to support this, but I still wish I could find a better approach.<|||||>I believe you re-run the target test `test_saved_model_creation_extended` and maybe a few ones after this change, right?
If so, still good for me π― !<|||||>Yes, I checked those tests again! |
transformers | 20,383 | closed | [ViT] `'str' object cannot be interpreted as an integer` | ### System Info
using `transformers==4.24.0`, the snippet:
```
import requests
from PIL import Image
from transformers import AutoProcessor
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Use the same feature extractor for everyone
feature_extractor = AutoProcessor.from_pretrained("hf-internal-testing/tiny-random-ViTModel")
inputs = feature_extractor(images=image, return_tensors="pt")
```
fails with the error:
```
2022-11-22 13:06:49.965720: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-22 13:06:50.154832: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2022-11-22 13:06:50.732918: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64
2022-11-22 13:06:50.733007: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nccl2/lib:/usr/local/cuda/extras/CUPTI/lib64
2022-11-22 13:06:50.733017: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "/home/younes_huggingface_co/scratch/test_processor.py", line 10, in <module>
inputs = feature_extractor(images=image, return_tensors="pt")
File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/models/vit/feature_extraction_vit.py", line 144, in __call__
images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/models/vit/feature_extraction_vit.py", line 144, in <listcomp>
images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/transformers/image_utils.py", line 418, in resize
return image.resize(size, resample=resample)
File "/home/younes_huggingface_co/miniconda3/envs/fix-bnb-test/lib/python3.10/site-packages/PIL/Image.py", line 2082, in resize
return self._new(self.im.resize(size, resample, box))
TypeError: 'str' object cannot be interpreted as an integer
```
However, using the main branch fixes the issue, so just flagging it!
cc @sgugger
| 11-22-2022 13:09:20 | 11-22-2022 13:09:20 | cc @amyeroberts <|||||>False alarm!
the snippet works when using `google/vit-base-patch16-224`
```
import requests
from PIL import Image
from transformers import AutoProcessor
model_id = "google/vit-base-patch16-224"
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Use the same feature extractor for everyone
feature_extractor = AutoProcessor.from_pretrained(model_id)
inputs = feature_extractor(images=image, return_tensors="pt")
```
so I expect the fix to be on `hf-internal-testing/tiny-random-ViTModel`; updating the `size` attribute to an integer seems to fix the issue.
Opened a PR in: https://huggingface.co/hf-internal-testing/tiny-random-ViTModel/discussions/2<|||||>I don't think the model should be updated. It does require a source install of Transformers but it was also added since the last release, so it's fair IMO.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,382 | closed | Finetuning of wav2vec2-xls-r-300m outputs invalid words for Bengali data | ### System Info
I have used the pretrained wav2vec2-xls-r-300m model and fine-tuned it on a 1000-hour Bengali dataset. Training took 4 full days for 20 epochs. However, there is an issue in decoding: the model decodes in an arbitrary fashion, basically outputting random combinations of Bengali letters (which have no meaning, as confirmed by Bengali natives). It shows a WER of 100% for all sentences.
My code is based on the notebook at https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb
@patrickvonplaten, @anton-l @sanchit-gandhi Pls suggest on what could have gone wrong. Should I use fairseq & redo the experiments?
Thanks
Eval loss graph on tensorboard looks like below.

### Who can help?
@patrickvonplaten, @anton-l, @sanchit-gandhi
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Finetuning XLSR-300m models on the Bengali dataset resulted in this behaviour
### Expected behavior
WER should have been less than 100%, and should have outputted reasonably readable hypothesis words | 11-22-2022 12:26:26 | 11-22-2022 12:26:26 | Hey @manjuke! Cool to see you're using XLS-R for fine-tuning on Bengali. The "issues" page on Transformers is reserved for issues related to the Transformers modelling code. For questions related to fine-tuning experiments, the forum is the best place to ask: https://discuss.huggingface.co
Could you copy your question over there? If you tag me (@sanchit-gandhi) I'll aim to answer quickly! Could you also provide a script / colab / command to reproduce this behaviour? From your eval/loss curve, it looks like you're overfitting quite drastically on your training set. This shouldn't happen for a dataset with 1k training hours, so something is definitely up!<|||||>Sure I have created a new ticket @ https://discuss.huggingface.co/t/finetunig-of-wav2vec2-xls-r-300m-outputs-invalid-words-for-bengali-data/26507<|||||>@sanchit-gandhi Could you please reply to my issue raised on hugging face. Awaiting response. Thanks<|||||>Replied on the forum!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,381 | closed | Revert `keys_to_ignore` for M2M100 | # What does this PR do?
This PR reverts a change that was made on `M2M100Model`; to reproduce, you can run:
```
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
src_lang = "eng_Latn"
tgt_lang = "spa_Latn"
model_id = "facebook/nllb-200-3.3B"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang=src_lang)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")
```
on `main`, users get:
```
│ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:2459 in _load_pretrained_model
│
│   2456 │   │   │   for key in missing_keys:
│   2457 │   │   │   │   if key.startswith(prefix):
│   2458 │   │   │   │   │   key = ".".join(key.split(".")[1:])
│ ❱ 2459 │   │   │   │   param = model_state_dict[key]
│   2460 │   │   │   │   if param.device == torch.device("meta"):
│   2461 │   │   │   │   │   if not load_in_8bit:
│   2462 │   │   │   │   │   │   set_module_tensor_to_device(model, key, "cpu", torch.empty(*para
╰──────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'encoder.embed_positions.weights'
```
cc @Narsil | 11-22-2022 11:05:48 | 11-22-2022 11:05:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20381). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,380 | closed | [ResNet] Improve backbone | # What does this PR do?
This PR improves the ResNetBackbone, by not assuming stages are always 4, and improving tests. | 11-22-2022 10:42:20 | 11-22-2022 10:42:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,379 | closed | Add `accelerate` support for `ESM` | # What does this PR do?
This PR adds `accelerate` support for `ESM` models, so that the largest `ESM` models (15B params) can be loaded in 8-bit, therefore easing accessibility for large protein models. This also introduces the first protein model that can be loaded in 8bit.
```
# pip install accelerate bitsandbytes
from transformers import AutoModel
model = AutoModel.from_pretrained("facebook/esm2_t48_15B_UR50D", device_map="auto", load_in_8bit=True)
```
cc @sgugger @Rocketknight1
slow tests pass | 11-22-2022 10:34:51 | 11-22-2022 10:34:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,378 | closed | Bump pillow from 9.0.1 to 9.3.0 in /examples/research_projects/decision_transformer | Bumps [pillow](https://github.com/python-pillow/Pillow) from 9.0.1 to 9.3.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/releases">pillow's releases</a>.</em></p>
<blockquote>
<h2>9.3.0</h2>
<p><a href="https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html">https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html</a></p>
<h2>Changes</h2>
<ul>
<li>Initialize libtiff buffer when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Limit SAMPLESPERPIXEL to avoid runtime DOS <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a> [<a href="https://github.com/wiredfool"><code>@βwiredfool</code></a>]</li>
<li>Inline fname2char to fix memory leak <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6329">#6329</a> [<a href="https://github.com/nulano"><code>@βnulano</code></a>]</li>
<li>Fix memory leaks related to text features <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6330">#6330</a> [<a href="https://github.com/nulano"><code>@βnulano</code></a>]</li>
<li>Use double quotes for version check on old CPython on Windows <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6695">#6695</a> [<a href="https://github.com/hugovk"><code>@βhugovk</code></a>]</li>
<li>GHA: replace deprecated set-output command with GITHUB_OUTPUT file <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6697">#6697</a> [<a href="https://github.com/nulano"><code>@βnulano</code></a>]</li>
<li>Remove backup implementation of Round for Windows platforms <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6693">#6693</a> [<a href="https://github.com/cgohlke"><code>@βcgohlke</code></a>]</li>
<li>Upload fribidi.dll to GitHub Actions <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6532">#6532</a> [<a href="https://github.com/nulano"><code>@βnulano</code></a>]</li>
<li>Fixed set_variation_by_name offset <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6445">#6445</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Windows build improvements <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6562">#6562</a> [<a href="https://github.com/nulano"><code>@βnulano</code></a>]</li>
<li>Fix malloc in _imagingft.c:font_setvaraxes <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6690">#6690</a> [<a href="https://github.com/cgohlke"><code>@βcgohlke</code></a>]</li>
<li>Only use ASCII characters in C source file <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6691">#6691</a> [<a href="https://github.com/cgohlke"><code>@βcgohlke</code></a>]</li>
<li>Release Python GIL when converting images using matrix operations <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6418">#6418</a> [<a href="https://github.com/hmaarrfk"><code>@βhmaarrfk</code></a>]</li>
<li>Added ExifTags enums <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6630">#6630</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Do not modify previous frame when calculating delta in PNG <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6683">#6683</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Added support for reading BMP images with RLE4 compression <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6674">#6674</a> [<a href="https://github.com/npjg"><code>@βnpjg</code></a>]</li>
<li>Decode JPEG compressed BLP1 data in original mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6678">#6678</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>pylint warnings <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6659">#6659</a> [<a href="https://github.com/marksmayo"><code>@βmarksmayo</code></a>]</li>
<li>Added GPS TIFF tag info <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6661">#6661</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Added conversion between RGB/RGBA/RGBX and LAB <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6647">#6647</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Do not attempt normalization if mode is already normal <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6644">#6644</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Fixed seeking to an L frame in a GIF <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6576">#6576</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Consider all frames when selecting mode for PNG save_all <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6610">#6610</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Don't reassign crc on ChunkStream close <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6627">#6627</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Raise a warning if NumPy failed to raise an error during conversion <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6594">#6594</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Only read a maximum of 100 bytes at a time in IMT header <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6623">#6623</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Show all frames in ImageShow <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6611">#6611</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Allow FLI palette chunk to not be first <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6626">#6626</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>If first GIF frame has transparency for RGB_ALWAYS loading strategy, use RGBA mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6592">#6592</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Round box position to integer when pasting embedded color <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6517">#6517</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Removed EXIF prefix when saving WebP <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6582">#6582</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Pad IM palette to 768 bytes when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6579">#6579</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Added DDS BC6H reading <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6449">#6449</a> [<a href="https://github.com/ShadelessFox"><code>@βShadelessFox</code></a>]</li>
<li>Added support for opening WhiteIsZero 16-bit integer TIFF images <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6642">#6642</a> [<a href="https://github.com/JayWiz"><code>@βJayWiz</code></a>]</li>
<li>Raise an error when allocating translucent color to RGB palette <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6654">#6654</a> [<a href="https://github.com/jsbueno"><code>@βjsbueno</code></a>]</li>
<li>Moved mode check outside of loops <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6650">#6650</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Added reading of TIFF child images <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6569">#6569</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Improved ImageOps palette handling <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6596">#6596</a> [<a href="https://github.com/PososikTeam"><code>@βPososikTeam</code></a>]</li>
<li>Defer parsing of palette into colors <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6567">#6567</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Apply transparency to P images in ImageTk.PhotoImage <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6559">#6559</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Use rounding in ImageOps contain() and pad() <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6522">#6522</a> [<a href="https://github.com/bibinhashley"><code>@βbibinhashley</code></a>]</li>
<li>Fixed GIF remapping to palette with duplicate entries <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6548">#6548</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Allow remap_palette() to return an image with less than 256 palette entries <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6543">#6543</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
<li>Corrected BMP and TGA palette size when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6500">#6500</a> [<a href="https://github.com/radarhere"><code>@βradarhere</code></a>]</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/python-pillow/Pillow/blob/main/CHANGES.rst">pillow's changelog</a>.</em></p>
<blockquote>
<h2>9.3.0 (2022-10-29)</h2>
<ul>
<li>
<p>Limit SAMPLESPERPIXEL to avoid runtime DOS <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a>
[wiredfool]</p>
</li>
<li>
<p>Initialize libtiff buffer when saving <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a>
[radarhere]</p>
</li>
<li>
<p>Inline fname2char to fix memory leak <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6329">#6329</a>
[nulano]</p>
</li>
<li>
<p>Fix memory leaks related to text features <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6330">#6330</a>
[nulano]</p>
</li>
<li>
<p>Use double quotes for version check on old CPython on Windows <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6695">#6695</a>
[hugovk]</p>
</li>
<li>
<p>Remove backup implementation of Round for Windows platforms <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6693">#6693</a>
[cgohlke]</p>
</li>
<li>
<p>Fixed set_variation_by_name offset <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6445">#6445</a>
[radarhere]</p>
</li>
<li>
<p>Fix malloc in _imagingft.c:font_setvaraxes <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6690">#6690</a>
[cgohlke]</p>
</li>
<li>
<p>Release Python GIL when converting images using matrix operations <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6418">#6418</a>
[hmaarrfk]</p>
</li>
<li>
<p>Added ExifTags enums <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6630">#6630</a>
[radarhere]</p>
</li>
<li>
<p>Do not modify previous frame when calculating delta in PNG <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6683">#6683</a>
[radarhere]</p>
</li>
<li>
<p>Added support for reading BMP images with RLE4 compression <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6674">#6674</a>
[npjg, radarhere]</p>
</li>
<li>
<p>Decode JPEG compressed BLP1 data in original mode <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6678">#6678</a>
[radarhere]</p>
</li>
<li>
<p>Added GPS TIFF tag info <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6661">#6661</a>
[radarhere]</p>
</li>
<li>
<p>Added conversion between RGB/RGBA/RGBX and LAB <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6647">#6647</a>
[radarhere]</p>
</li>
<li>
<p>Do not attempt normalization if mode is already normal <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6644">#6644</a>
[radarhere]</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/python-pillow/Pillow/commit/d594f4cb8dc47fb0c69ae58d9fff86faae4515bd"><code>d594f4c</code></a> Update CHANGES.rst [ci skip]</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/909dc64ed5f676169aa3d9b0c26f132a06321b83"><code>909dc64</code></a> 9.3.0 version bump</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/1a51ce7b955c65c8f2c6bc7772735b197b8a6aa3"><code>1a51ce7</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6699">#6699</a> from hugovk/security-libtiff_buffer</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/2444cddab2f83f28687c7c20871574acbb6dbcf3"><code>2444cdd</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/python-pillow/Pillow/issues/6700">#6700</a> from hugovk/security-samples_per_pixel-sec</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/744f455830871d61a8de0a5e629d4c2e33817cbb"><code>744f455</code></a> Added release notes</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/0846bfae48513f2f51ca8547ed3b8954fa501fda"><code>0846bfa</code></a> Add to release notes</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/799a6a01052cea3f417a571d7c64cd14acc18c64"><code>799a6a0</code></a> Fix linting</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/00b25fd3ac3648bc28eff5d4c4d816e605e3f05f"><code>00b25fd</code></a> Hide UserWarning in logs</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/05b175ef88c22f5c416bc9b8d5b897dea1abbf2c"><code>05b175e</code></a> Tighter test case</li>
<li><a href="https://github.com/python-pillow/Pillow/commit/13f2c5ae14901c89c38f898496102afd9daeaf6d"><code>13f2c5a</code></a> Prevent DOS with large SAMPLESPERPIXEL in Tiff IFD</li>
<li>Additional commits viewable in <a href="https://github.com/python-pillow/Pillow/compare/9.0.1...9.3.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 11-22-2022 10:22:05 | 11-22-2022 10:22:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,377 | closed | PreTrainedModel: provide a more intuitive way of getting the current size of embeddings | ### Feature request
Please add an intuitive and obvious method to get the size of the current token embeddings on `PreTrainedModel`.
For example, a method called `get_token_embeddings_size()` that can be used as a complement to `resize_token_embeddings()`.
### Motivation
Current API design around the `resize_token_embeddings()` method requires doing the following to increase the size of the token embeddings by 1:
```python
# add 1 new embedding
current_embeddings = my_xformer.resize_token_embeddings(None)
new_embeddings_size = current_embeddings.num_embeddings + 1
my_xformer.resize_token_embeddings(new_embeddings_size)
```
This is counterintuitive and bleeds implementation details to the call site. It requires me to know
1. that calling a "resize" method with the argument `None` returns the object to be resized (which is not intuitive), and
2. that "size" means the property `num_embeddings` on the returned object (admittedly it's not difficult to guess, but it is still a *guess*, and is in fact an implementation detail that I shouldn't need to know).
It would be better if I could do something like this:
```python
# add 1 new embedding
current_embeddings_size = my_xformer.get_token_embeddings_size()
new_embeddings_size = current_embeddings_size + 1
my_xformer.resize_token_embeddings(new_embeddings_size)
```
This provides an intuitively-named method to determine the current "size", and appropriately hides the implementation detail that "size" means the `num_embeddings` property on the object to be resized.
### Your contribution
Here's a proposed implementation:
```python
def get_token_embeddings_size(self) -> int:
return model_embeds.num_embeddings
```
i can make a PR if it would help. | 11-22-2022 10:07:40 | 11-22-2022 10:07:40 | Thanks for flagging, I had kind of the same problem recently when implementing #20043. I used
```
model.get_input_embeddings().weight.shape[0]
```
to get the embedding size, not sure if there is anything easier but having a method that does this would certainly be more helpful if you want to contribute it!
<|||||>PR made<|||||>I don't think you actually opened it, you just created a new branch :-)<|||||>ok PR actually made :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,376 | closed | support `t5` for `text-generation` pipeline | # What does this PR do?
A tentative attempt to support `T5` in the `text-generation` pipeline. I don't really expect this PR to get merged, as it is very hacky and IMO not a good idea to support `T5` for `text-generation`, but I would love some insights on what we could potentially do to support the `text-generation` pipeline for `T5`.
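For context, a minimal sketch of the path that already works today without this PR, using the `text2text-generation` pipeline (the checkpoint name is only an example):
```python
from transformers import pipeline

# T5 is an encoder-decoder model, so today it is served by the
# text2text-generation pipeline rather than text-generation.
generator = pipeline("text2text-generation", model="t5-small")
print(generator("translate English to German: The house is wonderful."))
```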
Probably the fix would be also to implement `T5ForCausalLM` but not sure if this makes sense! | 11-22-2022 09:59:39 | 11-22-2022 09:59:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,375 | closed | Raised exceptions under conditions contrary to those specified by assert statements | Co-author: @Batese2001
Test file: src/transformers/models/distilbert/modeling_distilbert.py
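The general pattern being applied in this PR (a minimal sketch; the condition and message are illustrative, not the exact DistilBERT checks):
```python
def validate_head_dims(dim: int, n_heads: int) -> None:
    # Before, the check was an assert, which is skipped when Python runs with -O:
    #     assert dim % n_heads == 0, "dim must be divisible by n_heads"
    # After, the same condition raises an explicit exception type instead:
    if dim % n_heads != 0:
        raise ValueError(f"dim ({dim}) must be divisible by n_heads ({n_heads})")
```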
Local testing: 
| 11-22-2022 09:43:47 | 11-22-2022 09:43:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR. As you can see, it shows a diff of 191 files. Could you open a fresh PR with just your proposed changes?<|||||>Yes, I will do that. Thanks for the feedback! <|||||>I will make a new PR after improving the validity checks on ci/circleci: check_code_quality. |
transformers | 20,374 | closed | [layoutlmv3] SER and RE task combined into one model | ### System Info
datasets==1.15.1
transformers==4.12.5
seqeval==1.2.2
deepspeed==0.5.7
tensorboard==2.7.0
sentencepiece
timm==0.4.12
Pillow
einops
textdistance
shapely
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
combine SER and RE task into one model using layoutlmv3
### Expected behavior
Hi @NielsRogge,
We now want to train the SER and RE tasks using LayoutLMv3, but two separate models are heavy to deploy with TensorRT. If we combine the two tasks into one model, will the performance on SER and RE decrease much compared to the two separate models? We have little experience with this, so could you give us some advice? Thank you very much. | 11-22-2022 09:42:16 | 11-22-2022 09:42:16 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think it's totally doable to share the Transformer encoder and use 2 separate heads. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this issue as it seems resolved. Feel free to reopen. |
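Following up on the shared-encoder suggestion above, a minimal sketch of one possible layout (the class name and the pairwise RE head are illustrative simplifications, not the official LayoutLMv3 heads):
```python
import torch
import torch.nn as nn
from transformers import LayoutLMv3Model


class LayoutLMv3ForSerAndRe(nn.Module):  # hypothetical combined model
    def __init__(self, num_ser_labels: int, num_re_labels: int, checkpoint: str = "microsoft/layoutlmv3-base"):
        super().__init__()
        self.backbone = LayoutLMv3Model.from_pretrained(checkpoint)  # shared encoder for both tasks
        hidden = self.backbone.config.hidden_size
        self.ser_head = nn.Linear(hidden, num_ser_labels)    # per-token classification head (SER)
        self.re_head = nn.Linear(2 * hidden, num_re_labels)  # simplified pairwise head (RE)

    def forward(self, input_ids, bbox, attention_mask, pixel_values=None):
        out = self.backbone(input_ids=input_ids, bbox=bbox, attention_mask=attention_mask, pixel_values=pixel_values)
        seq = out.last_hidden_state      # (batch, seq_len, hidden)
        ser_logits = self.ser_head(seq)  # one SER label per token
        b, n, h = seq.shape
        # Score every (head, tail) token pair by concatenating their hidden states.
        pair = torch.cat([seq.unsqueeze(2).expand(b, n, n, h), seq.unsqueeze(1).expand(b, n, n, h)], dim=-1)
        re_logits = self.re_head(pair)   # (batch, seq_len, seq_len, num_re_labels)
        return ser_logits, re_logits
```
Whether accuracy drops compared to two separate models depends mostly on how the two losses are balanced during training, so it is worth validating on your own data.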
transformers | 20,373 | closed | change the way sentinel tokens can be retrieved | # What does this PR do?
fixes the issue #19298
@SaulLu | 11-22-2022 09:26:32 | 11-22-2022 09:26:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@SaulLu @sgugger Done<|||||>As pointed above, you're still missing the stronger test to detect the sentinel tokens.<|||||>Thanks a lot for working on this. Maybe I'm just nitpicking, but wouldn't it be great if the returned sentinel tokens were sorted?
```
self.tokenizer.get_sentinel_tokens()
['<extra_id_1>', '<extra_id_26>', '<extra_id_55>', '<extra_id_87>', '<extra_id_50>', '<extra_id_49>', '<extra_id_74>', '<extra_id_66>', '<extra_id_83>', '<extra_id_30>', '<extra_id_3>', '<extra_id_0>', '<extra_id_90>', '<extra_id_14>', '<extra_id_71>', '<extra_id_6>', '<extra_id_18>', '<extra_id_4>', '<extra_id_75>', '<extra_id_99>', '<extra_id_63>', '<extra_id_58>', '<extra_id_48>', '<extra_id_62>', '<extra_id_73>', '<extra_id_20>', '<extra_id_70>', '<extra_id_21>', '<extra_id_38>', '<extra_id_34>', '<extra_id_88>', '<extra_id_28>', '<extra_id_97>', '<extra_id_91>', '<extra_id_65>', '<extra_id_81>', '<extra_id_98>', '<extra_id_23>', '<extra_id_96>', '<extra_id_12>', '<extra_id_19>', '<extra_id_79>', '<extra_id_78>', '<extra_id_68>', '<extra_id_95>', '<extra_id_35>', '<extra_id_42>', '<extra_id_27>', '<extra_id_85>', '<extra_id_67>', '<extra_id_17>', '<extra_id_36>', '<extra_id_93>', '<extra_id_37>', '<extra_id_60>', '<extra_id_77>', '<extra_id_32>', '<extra_id_92>', '<extra_id_33>', '<extra_id_40>', '<extra_id_86>', '<extra_id_53>', '<extra_id_10>', '<extra_id_31>', '<extra_id_72>', '<extra_id_24>', '<extra_id_80>', '<extra_id_13>', '<extra_id_45>', '<extra_id_61>', '<extra_id_52>', '<extra_id_8>', '<extra_id_44>', '<extra_id_57>', '<extra_id_16>', '<extra_id_84>', '<extra_id_51>', '<extra_id_56>', '<extra_id_41>', '<extra_id_64>', '<extra_id_39>', '<extra_id_9>', '<extra_id_25>', '<extra_id_59>', '<extra_id_46>', '<extra_id_7>', '<extra_id_54>', '<extra_id_29>', '<extra_id_89>', '<extra_id_22>', '<extra_id_15>', '<extra_id_43>', '<extra_id_47>', '<extra_id_76>', '<extra_id_5>', '<extra_id_94>', '<extra_id_11>', '<extra_id_82>', '<extra_id_2>', '<extra_id_69>']
``` |
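One way to get a sorted view today (a minimal sketch, assuming the standard `<extra_id_N>` naming shown above):
```python
import re


def sort_sentinel_tokens(tokens):
    # Sort numerically on the N in <extra_id_N> rather than lexicographically.
    return sorted(tokens, key=lambda t: int(re.search(r"\d+", t).group()))


# e.g. sort_sentinel_tokens(tokenizer.get_sentinel_tokens()), where `tokenizer` is the T5 tokenizer above
```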
transformers | 20,372 | open | Community contribution - `BetterTransformer` integration for more models! | ## `BetterTransformer` integration for more models!
`BetterTransformer` API provides faster inference on CPU & GPU through a simple interface!
Models can benefit from very interesting speedups with a one-liner, provided you install the latest version of PyTorch. A complete guideline on how to convert a new model is available in the [BetterTransformer documentation](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute)!
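For reference, once a model is supported the conversion itself is roughly a one-liner (a minimal sketch; the checkpoint name is only an example):
```python
from transformers import AutoModel
from optimum.bettertransformer import BetterTransformer

model = AutoModel.from_pretrained("bert-base-uncased")
# Swap the supported encoder layers for their BetterTransformer counterparts.
model = BetterTransformer.transform(model, keep_original_model=False)
```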
Here is a list of models that could potentially be supported. Pick one of the architectures below and let's discuss the conversion!
Text models:
- [x] FSMT - [FSMTEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/fsmt/modeling_fsmt.py#L397) / @Sumanth077 https://github.com/huggingface/optimum/pull/494
- [ ] MobileBERT - [MobileBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mobilebert/modeling_mobilebert.py#L498) / @raghavanone https://github.com/huggingface/optimum/pull/506
- [x] MBart - [MBartEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/mbart/modeling_mbart.py#L296) + [M2M100EncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/m2m_100/modeling_m2m_100.py#L345) / https://github.com/huggingface/optimum/pull/516 @ravenouse
- [ ] ProphetNet - [ProphetNetEncoderLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/prophetnet/modeling_prophetnet.py#L1130)
- [x] RemBert - [RemBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/rembert/modeling_rembert.py#L415) / @hchings https://github.com/huggingface/optimum/pull/545
- [ ] RocBert - [RocBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roc_bert/modeling_roc_bert.py#LL519C7-L519C19) / @shogohida https://github.com/huggingface/optimum/pull/542
- [ ] RoFormer - [RoFormerLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/roformer/modeling_roformer.py#L448)
- [x] Tapas - [TapasLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/tapas/modeling_tapas.py#L524) / https://github.com/huggingface/optimum/pull/520
Vision models:
- [ ] Detr - [DetrLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/detr/modeling_detr.py#L610)
- [ ] Flava - [FlavaLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/flava/modeling_flava.py#L597) / https://github.com/huggingface/optimum/pull/538
- [x] GLPN - [GLPNLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/glpn/modeling_glpn.py#L292) (cannot be supported)
- [x] ViLT - [ViLTLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/vilt/modeling_vilt.py#L472) / https://github.com/huggingface/optimum/pull/508
Audio models:
- [ ] Speech2Text - [Speech2TextLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)
- [ ] NEW: Audio Speech Transformer - [ASTLayer](https://github.com/huggingface/transformers/blob/f2e7d270ec795be09e6187dd2459edb43bd861c1/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L274) / @ravenouse https://github.com/huggingface/optimum/pull/548
Let us also know if you think that some architectures can be supported that we missed. Note that for encoder-decoder based models below, we expect to convert the encoder only.
**Support for decoder-based models coming soon!**
cc @michaelbenayoun @fxmarty
https://github.com/huggingface/optimum/issues/488 | 11-22-2022 08:55:51 | 11-22-2022 08:55:51 | > NotImplementedError: The Better Transformers implementation for the model DebertaV2Model has not beenimplemented yet. Please open an issue requesting the addition of this model with its `BetterTransformer`implementation.
It's not on your list, but would you complain if I did this for DebertaV2Model?<|||||>It is not in the list because `DebertaV2` does not have a regular attention mechanism, so it is not possible to use it with BetterTransformer.<|||||>Yes I second what @michaelbenayoun said, please see related: https://github.com/huggingface/optimum/issues/487<|||||>makes a lot of sense - sorry I should have thought about that a bit harder before posting!<|||||>I noticed that Better Transformers for the T5 model has not been implemented yet. Will it be implemented in the future (if possible)? Thanks.<|||||>Hi @GenVr
Thanks a lot for your reply! Unfortunately `T5` cannot be supported because of the nature of its attention mechanism. In fact `T5` uses attention bias and this is not supported for `BetterTransformer`
Thanks!<|||||>Hi :) I would like to work on the implementation for [RemBertLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/rembert/modeling_rembert.py#L415).
What are the next steps in getting started?
Thank you!<|||||>Hey @RJZauner !
Thanks so much for your interest in helping us integrating more models for `BetterTransformer` !
RemBert seems to use the same attention mechanism as BERT, the only difference should be on the Embedding layer, which is fine for us! So I would say you can move ahead and start forking [optimum](https://github.com/huggingface/optimum) library, create a new branch and open a draft PR. Feel free to have some inspiration from what has been done by https://github.com/huggingface/optimum/pull/494 and https://github.com/huggingface/optimum/pull/508 to see what exactly needs to be done ;) Ping us (myself, @michaelbenayoun & @fxmarty) whenever you feel that you need help!<|||||>Hi @younesbelkada, I would like to work on the easiest of the models mentioned above. Which one do you recommend? What I said might sound a bit weird but I want to tackle a simple one since I'm not very familiar with these models π <|||||>Hello, I would like to tackle the implementation for [TapasLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/tapas/modeling_tapas.py#L524).
May I ask you how I can start the further steps?
Thank you for your time.<|||||>Hi @shogohida and @JuheonChu ,
You can read this [page](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute) for learning how to contribute. You can then open a PR with your code, and ask questions there, we will be glad to help!
Also @shogohida, I think they are all similar in terms of difficulty, so do not block on that, maybe choose a model with the modality the most familiar to you.<|||||>Seconding what @michaelbenayoun said, feel free to check some example PRs https://github.com/huggingface/optimum/pull/508 or https://github.com/huggingface/optimum/pull/494 for reference!
@shogohida , you can take RocBERT, actually it copies from Bert so the conversion will be very easy :) <|||||>Thanks guys for your replies! I will take RocBERT then!<|||||>Thanks @michaelbenayoun ! I will take TapasLayer !<|||||>Hi! Thank you so much for opening this issue.
1. I was implementing the RemBERT and had some questions. But then I noticed that @RJZauner had already been working on that. I am going to hold my work on that and I am looking forward to see RJZauner's implementations!
2. I will work on the mBART.
3. I also found some dead links and some points unclear on this [page](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute). How should I report and help to solve the problems I found? <|||||>Hello @younesbelkada,
I would like to take [DetrLayer](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/detr/modeling_detr.py#L610). Nice tutorial btw π<|||||>Hi @blakechi !
Sure you can take it ;) let me know if you need help opening a PR!<|||||>Hi @ravenouse !
Thanks for your help! Yes you can take MBART ;)
Regarding the dead link could you open an issue at optimum?
Thanks!<|||||>
> Hey @RJZauner !
> Thanks so much for your interest in helping us integrating more models for `BetterTransformer` !
> RemBert seems to use the same attention mechanism as BERT, the only difference should be on the Embedding layer, which is fine for us! So I would say you can move ahead and start forking [optimum](https://github.com/huggingface/optimum) library, create a new branch and open a draft PR. Feel free to have some inspiration from what has been done by [huggingface/optimum#494](https://github.com/huggingface/optimum/pull/494) and [huggingface/optimum#508](https://github.com/huggingface/optimum/pull/508) to see what exactly needs to be done ;) Ping us (myself, @michaelbenayoun & @fxmarty) whenever you feel that you need help!
Thank you for the info!<|||||>Hello @michaelbenayoun and @younesbelkada !
First time contributing for me :)
I would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)
What are the first steps ? Create a PR ?
Thanks in advance.<|||||>> Hello @michaelbenayoun and @younesbelkada !
>
> First time contributing for me :)
>
> I would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)
>
> What are the first steps ? Create a PR ?
>
> Thanks in advance.
Hello, I am absolutely sure that they will give you a better suggestion than what I have.
I would like to share that it is good to read `CONTRIBUTING.md` in the transformer repository.
I read through every content very carefully and made my first contribution!<|||||>> > Hello @michaelbenayoun and @younesbelkada !
> > First time contributing for me :)
> > I would like to handle the implementation for [Speech2Text](https://github.com/huggingface/transformers/blob/95754b47a6d4fbdad3440a45762531e8c471c528/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L350)
> > What are the first steps ? Create a PR ?
> > Thanks in advance.
>
> Hello, I am absolutely sure that they will give you a better suggestion than what I have. I would like to share that it is good to read `CONTRIBUTING.md` in the transformer repository. I read through every content very carefully and made my first contribution!
Hello @JuheonChu :)
I am definitely have a look at it ! thanks<|||||>Hi @lucaspct,
Yes the first steps would be to read [the guide explaining how to contribute to `optimum.bettertransformer`](https://huggingface.co/docs/optimum/bettertransformer/tutorials/contribute), and then opening a [PR on Optimum](https://github.com/huggingface/optimum/pulls), we will support you there!<|||||>Hi @younesbelkada @michaelbenayoun I'd love to take on the RoFormer model if it isn't claimed yet. Will open a PR after I read through the guide!<|||||>I would like to take a crack at the ProphetNet encoder if it has not been claimed yet <|||||>Thank you very much @miyu386 & @adit299 !
Of course yes you can give a try on that ;) feel free to start to open a PR on `optimum` and we'll guide you from there πͺ <|||||>I would like to work on the `ASTLayer` if no one has taken it!<|||||>Hi @younesbelkada I'd like to tackle the `FlavaLayer` if it has not been taken!<|||||>Hi @katiele47
Sure no problem! Feel free to open a PR and tag us there! I will update the table above once the PRs are open ;) <|||||>Hi, @younesbelkada I'd like to take `GLPNLayer` if no one has claimed it. will open the PR soon for this :)<|||||>Hi @hazrulakmal !
Sure! Let me know once you open the PR ;) <|||||>Hi! I'd love to contribute wherever I can be useful.<|||||>Hi @younesbelkada !
I found the `torch._transformer_encoder_layer_fwd()` function is called to execute forwarding in our `BetterTransformerBaseLayer`. To better understand what's going on under the hood, I searched this function online but didn't find much information about it. Could you tell me where I can check the source code of it?
Thank you so much! <|||||>Hello,
I'd like to work on `RocBert` layer. I'll go over the contributing guide and open a PR. Anything else I need to go through as a starting point?<|||||>Hi @younesbelkada, I added a [PR](https://github.com/huggingface/optimum/pull/545) for RemBERT. Since RemBERT's primary changes are in embeddings (decoupling in/output embeddings, increasing output embeddings during pre-training to improve downstream performance, etc), the needed changes should be straightforward. But please kindly let me know if I missed anything.
I also want to apologize to @RJZauner, I realized you've claimed RemBERT after re-reading this thread now. I'll be more careful next time!! And lmk if you want me to withdraw the PR to use your changes instead. If not, feel free to add on top of it and hopefully we can both learn together ππ.<|||||>Hi @M0315G
Thanks so much! Unfortunately this is already a WIP from @shogohida here: https://github.com/huggingface/optimum/pull/542
Feel free to take another model that is free ;) also more models will be added in the list as `transformers` is continuing integrating more models<|||||>Should I take RoFormer then?<|||||>Yes, I think that you can take this one. From my understanding of `RoFormer` it consists of a LM similar as BERT, but uses Rotary positional embedding and the attention mechanism is classic, i.e. similar as BERT. Ideally you can double check that too but for me `BetterTransformer` should be supported on `RoFormer` so I would say that you can take it yes<|||||>Understood. Will go through the docs and open a PR in optimum. Any other things I should take care of?<|||||>Just take some inspiration from other PRs and everything should go smoothly!<|||||>Hi @ravenouse !
From what I got, this function is a C++ binding of the transformer encoder operation that is first defined [here](https://github.com/pytorch/pytorch/blob/226e803ecb483d2488f2381afb86c1e849814714/aten/src/ATen/native/native_functions.yaml#L13272) and fully defined [here](https://github.com/pytorch/pytorch/blob/226e803ecb483d2488f2381afb86c1e849814714/aten/src/ATen/native/transformers/transformer.cpp#L64) as you can see, the whole transformer encoder operations (self attention + ffn) is defined in a single operation<|||||>Hi @hchings !
Just reviewed your PR ;) will merge that soon!
<|||||>Hi @blakechi I saw that you wanted to work on DertLayer a week ago ? How is your progress now? Just asking cus If you are not working on it anymore, Iβm more than happy to help you out with the conversion and take over the PR? :)<|||||>Hi @hazrulakmal
I realized that [DPT model ](https://github.com/huggingface/transformers/blame/6a707cf586f37865d1a9fac02c56c43cf8dcf979/src/transformers/models/dpt/modeling_dpt.py#L311)(Depth Estimation model) can be supported too as it uses ViT as a backbone. Would you like to give it a try with this one instead? π
We are currently looking at speeding up this model so this will be a nice addition<|||||>Hi @younesbelkada, is there any model left that I can work on? Seems like the Detr was claimed a while ago but not sure if there is a PR opened for that. <|||||>@younesbelkada yup! definitely. I can take this up! I checked on the encoder layer and it looks possible to integrate with BT, should I name the new class as `DPTViTLayerBetterTransformer`?
```
(encoder): DPTViTEncoder(
(layer): ModuleList(
(0): DPTViTLayer(
(attention): DPTViTAttention(
(attention): DPTViTSelfAttention(
(query): Linear(in_features=32, out_features=32, bias=True)
(key): Linear(in_features=32, out_features=32, bias=True)
(value): Linear(in_features=32, out_features=32, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(output): DPTViTSelfOutput(
(dense): Linear(in_features=32, out_features=32, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
)
(intermediate): DPTViTIntermediate(
(dense): Linear(in_features=32, out_features=37, bias=True)
(intermediate_act_fn): GELUActivation()
)
(output): DPTViTOutput(
(dense): Linear(in_features=37, out_features=32, bias=True)
(dropout): Dropout(p=0.1, inplace=False)
)
(layernorm_before): LayerNorm((32,), eps=1e-12, elementwise_affine=True)
(layernorm_after): LayerNorm((32,), eps=1e-12, elementwise_affine=True)
)
```
<|||||>Very cool! Yes you can name it like this ;) Looking forward to seeing your PR πͺ <|||||>Hi @miyu386
ViT-Hybrid has just been integrated to `transformers` would you like to take this one?
https://github.com/huggingface/transformers/blob/d151a8c55032d5a21800ea0813c4304af8b8e9f7/src/transformers/models/vit_hybrid/modeling_vit_hybrid.py#L362<|||||>> Hi @younesbelkada, I added a [PR](https://github.com/huggingface/optimum/pull/545) for RemBERT. Since RemBERT's primary changes are in embeddings (decoupling in/output embeddings, increasing output embeddings during pre-training to improve downstream performance, etc), the needed changes should be straightforward. But please kindly let me know if I missed anything.
>
> I also want to apologize to @RJZauner, I realized you've claimed RemBERT after re-reading this thread now. I'll be more careful next time!! And lmk if you want me to withdraw the PR to use your changes instead. If not, feel free to add on top of it and hopefully we can both learn together ππ.
Hey :) don't sweat it - no need to withdraw your PR.
Your implementation looks great - thanks!<|||||>@younesbelkada Yes, I'd like to give it a try! Thanks<|||||>May I ask you a question if there is anyway that we can see a "BetterTransformer" features for models that already have like Tapas or FSMT?<|||||>Hi @JuheonChu , you can see it here: https://github.com/huggingface/optimum/pull/520/files , https://github.com/huggingface/optimum/pull/494/files<|||||>Hey @younesbelkada, happy to take a look at any remaining model that needs integration!<|||||>Same here @younesbelkada, any model you need<|||||>Hi @HVjay @mszsorondo don't hesitate to pick any model you are interested in and is using a classic encoder attention + feed-forward architecture! You can open an PR in Optimum, and if you need help we'll guide you from there.<|||||>> Understood. Will go through the docs and open a PR in optimum. Any other things I should take care of?
Hi @younesbelkada ,
Opened a PR for RoFormer integration for BetterTransformer form my other Github Account. But please kindly let me know if I missed anything.<|||||>Hi @fxmarty wanted to confirm that Conditional Detr - [`ConditionalDetrEncoderLayer`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/conditional_detr/modeling_conditional_detr.py#L763) can be supported!<|||||>Hi @HVjay,
Thanks for your interest! I think Detr can be supported as well as ConditionalDetr as it seems to use classic attention mechanism - this can be also confirmed by [the paper]( https://arxiv.org/pdf/2005.12872.pdf ) that states that the method uses classic transformer-based models. However note that only the encoder part can be converted.
Hi @mszsorondo,
Thank you for your message! Recently `BLIP` has been added, the model should support BetterTransformer integration (Vision + text)<|||||>Hi @HVjay ,
Actually there is already someone working on Detr, check: https://github.com/huggingface/optimum/pull/684 <|||||>Hi @younesbelkada , could I pick up RoFormer ?<|||||>@sushmanthreddy are you doing Detr anymore...? if doing please tell<|||||>@younesbelkada Hi π could I take `Speech2Text` π
<|||||>@younesbelkada Hello, I would love to contribute to this issue. I am new to contributing in transformers. Can you please tell me which of the model layers are vacant I would like to take one up :)<|||||>@younesbelkada I would like to work on Detr.
@mszsorondo Are you still working on it? There has not been any activity on your [PR](https://github.com/huggingface/optimum/pull/684) since Jan 8. I can pull from your PR and fix the failing tests.
<|||||>@awinml I actaully submitted the pr for Detr Model
- so i forgot to mention,sorry buddy
- you can look for other model available
- [here is the pr](https://github.com/huggingface/optimum/pull/1022)<|||||>@dewasahu2003 No problem.
Its always better to inform the original author and pull from their PR so they get due credit. Hence the question was aimed at @mszsorondo.<|||||>@younesbelkada Hey π
I have submitted the pr for `BetterTransformer` for detr
I mentioned you there [PR](https://github.com/huggingface/optimum/pull/1022)
From next time i would keep in mind to ask pr authors<|||||>Hi, @younesbelkada I'd like to work on `ProphetNet` π<|||||>> @younesbelkada I would like to work on Detr.
>
> @mszsorondo Are you still working on it? There has not been any activity on your [PR](https://github.com/huggingface/optimum/pull/684) since Jan 8. I can pull from your PR and fix the failing tests.
Go for it! Sorry for the delay<|||||>Hi @younesbelkada, @michaelbenayoun, and @fxmarty,
I would like to work on Speech2TextLayer.
What are the next steps in getting started?
Thank you!<|||||>Hi! @younesbelkada @michaelbenayoun @fxmarty
I'm interested in adding support for one of the models in the list. Although, I believe that the only model left might be Speech2TextLayer and has been claimed by @Jack-Chuang <|||||>Hello @younesbelkada @fxmarty and @michaelbenayoun
I would like to work on the `RoFormer` layer since I saw that someone had already worked on `ProphetNet`. Has the model been claimed ?<|||||>Hello @younesbelkada @fxmarty and @michaelbenayoun
I would love to help you with the integration of more models for `BetterTransformer`! I'm happy to take what is left since a lot of developers are already contributing to most of the models I think. Let me know if I can still help with something!
<|||||>@younesbelkada is there anything I can help with in this issue? |
transformers | 20,371 | closed | Raised specific types of exceptions on distilbert model | Co-author: Batese @batese2001
Local test file: 
| 11-22-2022 08:50:07 | 11-22-2022 08:50:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I made a different PR with the same request here: https://github.com/huggingface/transformers/pull/20375
|
transformers | 20,370 | closed | Hugging face community on fb group | Hello everyone, you can join our group to discuss new state-of-the-art methods in deep learning, machine learning, and natural language processing through the link below:
[link](https://fb.me/g/p_Qe5xCLF7zxgu4sYn/NRWMjdeM)
| 11-22-2022 08:34:53 | 11-22-2022 08:34:53 | Hey there! Thanks a lot for creating and sharing the group :fire:
FYI, there is also a community Discord server with over 16000 members. You can join in [hf.co/join/discord](http://hf.co/join/discord) :hugs: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,369 | closed | typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-22-2022 08:05:23 | 11-22-2022 08:05:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20369). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |