repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 19,566 | closed | [Doctest] Add `configuration_bloom.py` | Bloom config update
Based on issue #19487 | 10-13-2022 08:28:03 | 10-13-2022 08:28:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ydshieh! It is ready for review :) |
transformers | 19,565 | closed | Fix `ImageToTextPipelineTests.test_small_model_tf` | # What does this PR do?
`ImageToTextPipelineTests::test_[small/large]_model_[pt/tf]` were all skipped before. I believe they were enabled after #19366 (or its child commit - BTW, thank you @sgugger !).
We have to update the wrong expected values. | 10-13-2022 07:35:29 | 10-13-2022 07:35:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,564 | closed | [Doctest] Add `configuration_yoso.py` | # What does this PR do?
Add configuration_yoso.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
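For context, the docstring example that gets doctested in a configuration file typically follows this pattern (a generic sketch; in the actual docstring it appears with `>>>` prompts):

```python
from transformers import YosoConfig, YosoModel

# Initializing a default YOSO configuration
configuration = YosoConfig()
# Initializing a model (with random weights) from that configuration
model = YosoModel(configuration)
# Accessing the model configuration
configuration = model.config
```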
@sgugger could you please check it?
Thanks :)
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-13-2022 06:57:08 | 10-13-2022 06:57:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@grgkaran03 You mixed 2 PRs together (ViTMAE and YOSO), and there is another PR #19567 doing the same changes. |
transformers | 19,563 | closed | [Doctest] Add `configuration_roberta.py` | Add `configuration_roberta.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you take a look at it?
Thanks :) | 10-13-2022 06:53:33 | 10-13-2022 06:53:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>haha, no worries 😄 |
transformers | 19,562 | closed | [Doctest] Add `configuration_reformer.py` | Add `configuration_reformer.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you check it?
Thanks :) | 10-13-2022 06:53:11 | 10-13-2022 06:53:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,561 | closed | [Doctest] Add `configuration_vit.py` | Add `configuration_vit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you please take a look at it?
Thanks :) | 10-13-2022 06:52:56 | 10-13-2022 06:52:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,560 | closed | [Doctest] Add `configuration_deit.py` | Add `configuration_deit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@ydshieh could you please check it?
Thanks :) | 10-13-2022 06:52:49 | 10-13-2022 06:52:49 | |
transformers | 19,559 | closed | Fix `test_tf_encode_plus_sent_to_model` for `TAPAS` | # What does this PR do?
`TapasTokenizer.encode_plus` requires a `table` argument. Currently, this test calls `TokenizerTesterMixin.test_tf_encode_plus_sent_to_model`, therefore it didn't provide this argument and fails.
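For reference, a minimal sketch of what providing the required `table` argument looks like (the checkpoint and data here are only illustrative):

```python
import pandas as pd
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
# TAPAS expects every table cell as a string
table = pd.DataFrame({"Actors": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
encoding = tokenizer(table=table, queries=["How old is Brad Pitt?"], return_tensors="tf")
```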
This PR completely overwrites this test in `TapasTokenizationTest`, just like the pytorch one `test_torch_encode_plus_sent_to_model`. | 10-13-2022 06:01:19 | 10-13-2022 06:01:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,558 | closed | [Doctest] - Fixing doctest bert_generation configuration | # What does this PR do?
Fixing doctest in bert_generation configuration. Issue : #19487
## Fixes # (issue)
Added `(with random weights)` in `modeling_bert_generation.py`.
Added `modeling_bert_generation.py` to `documentation_tests.txt`.
## Who can review?
@ydshieh @sgugger
| 10-13-2022 04:58:57 | 10-13-2022 04:58:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,557 | closed | Fixing mobile bert configuration doctest | # What does this PR do?
Add configuration_mobilebert.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@sgugger / @ydshieh
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-13-2022 04:08:17 | 10-13-2022 04:08:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@RamitPahwa There is an extra empty line in `configuration_mobilebert.py` that causes the tests to fail.
It would work nicely once you remove it =)<|||||>@daspartho Thanks for the help, should work now!<|||||>And thanks @daspartho for the help 💯 |
transformers | 19,556 | closed | Fixing the Doctest for imageGPT config | # What does this PR do?
Add configuration_imagegpt.py to utils/documentation_tests.txt for doctest.
Based on issue #19487
@sgugger / @ydshieh
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-13-2022 03:50:20 | 10-13-2022 03:50:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,555 | closed | add gloo backend support for CPU DDP | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
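For context, a minimal sketch of what a CPU DDP setup with the gloo backend typically looks like (an illustration only, not the code changed in this PR):

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

# gloo supports CPU tensors (nccl is GPU-only); rank and world size are expected
# to come from the launcher environment (e.g. torchrun)
dist.init_process_group(backend="gloo")
model = torch.nn.Linear(10, 10)  # placeholder model
ddp_model = DistributedDataParallel(model)
```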
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-13-2022 01:29:46 | 10-13-2022 01:29:46 | @sgugger @yao-matrix please help review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,554 | closed | Any example for Wav2vec2ForXVector training? | Hello, I'm trying to train a Wav2vec2ForXVector model with the settings below, but the training loss is not falling below 2.3~2.7. Is there any example for Wav2vec2ForXVector training? Or has anyone experienced something like this?
pretrained_model : korean wav2vec2
num of audio : 2300k
num of speaker : 11223
num of used encoder layer : 1
output_xvector_dim : 512
learning rate : 2e-5
batch size : 512

| 10-12-2022 23:37:15 | 10-12-2022 23:37:15 | There isn't currently an example for XVector training in Transformers! Would you like to contribute this? You can begin simply by opening a PR with the python script that you're using. We can then iterate on it to verify correctness and hopefully get a successfully trained XVector system!
Probably also worth asking the same question on the forum to boost visibility: https://discuss.huggingface.co
Also cc @anton-l who has a speaker verification (SV) checkpoint on the Hub (https://huggingface.co/anton-l/wav2vec2-base-superb-sv), wondering if you had a local script for XVector fine-tuning?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, sorry for missing this! To answer @sanchit-gandhi's question: my SV checkpoint is a ported version of W2V2+XVector from S3PRL: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream/sv_voxceleb1
So no finetuning scripts yet, just inference <|||||>Hey @LEECHOONGHO! If you want to work together to get a working XVector training script, feel free to open a PR with the script that you've got and tag me. We can iterate on it, ensuring correctness and building up to a full Transformers examples script! I think this would be of benefit to others in the community 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,553 | closed | [WIP] Better Transformers integrations for BERT | # What does this PR do?
This PR is the first of a series that adds [PyTorch Better Transformer](https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/) support to the BERT model for inference speed-ups.
# Usage
```python
import torch
from transformers import AutoModelForSequenceClassification

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForSequenceClassification.from_pretrained("bert-large-cased").eval().to(device)
# `to_fast()` is the new method proposed by this PR (not part of the released library)
model.bert.encoder.to_fast()
```
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. -- Offline discussions
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? -- In progress
## Who can review?
@LysandreJik @younesbelkada
| 10-12-2022 23:12:59 | 10-12-2022 23:12:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,552 | closed | fix flaubert tokenizer | # What does this PR do?
Fix an issue from #19330, see the comments in the changes.
Current test failure [here](https://github.com/huggingface/transformers/actions/runs/3231701081/jobs/5291558259)
```bash
E if self.do_lowercase:
E AttributeError: 'FlaubertTokenizer' object has no attribute 'do_lowercase'
```
| 10-12-2022 22:00:43 | 10-12-2022 22:00:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,551 | closed | GPTTokenizer dependency removed from deberta class | # What does this PR do?
Hi @sgugger, I am raising this clean PR for PR #19421
Related to #19303 ,
- the GPT2Tokenizer dependency has been removed from DebertaTokenizer
- the GPT2TokenizerFast dependency has been removed from DebertaTokenizerFast
I ran `pytest tests/models/deberta/test_tokenization_deberta.py`, which passed.
Thanks for reviewing!
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-12-2022 20:04:11 | 10-12-2022 20:04:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,550 | closed | [Doctest] Add configuration_big_bird.py | Hi!
Updating configuration_big_bird.py
Based on issue https://github.com/huggingface/transformers/issues/19487
Tests passed

| 10-12-2022 19:41:28 | 10-12-2022 19:41:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, there seems to be a problem with this line

Check_code_quality is returning this error:

When I tried to keep it below 119 in length and move some of the text to the next line I got another error

Thus I can not add this 'with random weights' text and I am wondering how to solve this problem.
|
transformers | 19,549 | closed | [Doctest] Add `configuration_gpt2.py` | Add `configuration_gpt2.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you please check it?
Thanks :) | 10-12-2022 19:38:37 | 10-12-2022 19:38:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,548 | closed | [Whisper] Fix gradient checkpointing (again!) | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/19537#issuecomment-1276629836
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-12-2022 19:21:31 | 10-12-2022 19:21:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,547 | closed | Fix checkpoint in `MarkupLMConfig` | # What does this PR do?
Fix checkpoint in `MarkupLMConfig`
| 10-12-2022 18:57:14 | 10-12-2022 18:57:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,546 | closed | [Doctest] Add configuration_big_bird.py | Hi!
Updating configuration_big_bird.py
Based on issue https://github.com/huggingface/transformers/issues/19487
| 10-12-2022 18:48:11 | 10-12-2022 18:48:11 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,545 | closed | added type hints for Yolos Pytorch model | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-12-2022 18:42:13 | 10-12-2022 18:42:13 | @sgugger can you please help?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks good, but a few comments!
1) You'll need to run `make fixup` to ensure our code style stays consistent. You might need to install the dev dependencies for that, [see here](https://huggingface.co/docs/transformers/contributing#start-contributing-pull-requests)
2) We'd prefer to just use the built-in `bool` rather than `traitlets.Bool`
3) The most important methods to add type hints to are the `forward()` methods on the main model classes (e.g. in `modeling_yolos.py`). Still, I'll totally accept PRs covering other methods!<|||||>> This looks good, but a few comments!
>
> 1. You'll need to run `make fixup` to ensure our code style stays consistent. You might need to install the dev dependencies for that, [see here](https://huggingface.co/docs/transformers/contributing#start-contributing-pull-requests)
> 2. We'd prefer to just use the built-in `bool` rather than `traitlets.Bool`
> 3. The most important methods to add type hints to are the `forward()` methods on the main model classes (e.g. in `modeling_yolos.py`). Still, I'll totally accept PRs covering other methods!
Hey, I installed the dev dependencies but `make fixup` is giving an error: no such file or directory. I'll totally try to cover the methods you mentioned in my next PR :)
<|||||>Alright, let me see if I can run it here!<|||||>> Looks good to me now - I deleted the `traitlets.Bool` thing, but aside from that I'm happy with it!
Thanks man, it feels good to finally get my first PR on transformers; looking forward to contributing more
transformers | 19,544 | closed | Add normalize to image transforms module | # What does this PR do?
Adds `normalize` to the image transforms modules, as well as a helper utility function `get_channel_dimension_axis`.
* `normalize`: performs normalization of an image equivalent to that of the previous feature extractors (a rough sketch of the idea follows below)
* `get_channel_dimension_axis`: helper function that returns the index of the axis holding the channel dimension
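The sketch mentioned above, as an assumption about the general pattern rather than the exact implementation added here:

```python
import numpy as np


def normalize_sketch(image: np.ndarray, mean, std) -> np.ndarray:
    # assumes a channels-last float array: subtract the per-channel mean, divide by the std
    mean = np.asarray(mean, dtype=image.dtype)
    std = np.asarray(std, dtype=image.dtype)
    return (image - mean) / std
```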
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? | 10-12-2022 18:41:53 | 10-12-2022 18:41:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,543 | closed | added type hints for Yolos Pytorch model | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-12-2022 18:33:12 | 10-12-2022 18:33:12 | |
transformers | 19,542 | closed | [Doctest] Add `configuration_beit.py` | Add `configuration_beit.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you take a look at it?
Thanks :) | 10-12-2022 18:18:57 | 10-12-2022 18:18:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,541 | closed | Albert config update | Hey!
# What does this PR do?
Updates Albert config following the below issue (note that the first step is already done, thus no change was made)

Tests passed

| 10-12-2022 17:42:09 | 10-12-2022 17:42:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,540 | closed | [Doctest] `Add configuration_whisper.py` | Add `configuration_whisper.py` to `utils/documentation_tests.txt` for doctest.
Based on issue https://github.com/huggingface/transformers/issues/19487
@sgugger could you please check it?
Thanks :) | 10-12-2022 17:39:19 | 10-12-2022 17:39:19 | @daspartho There is an extra empty line which fails the tests. Could you remove it? Thanks.
In fact, you can use `make style` to see the necessary change(s).<|||||>@ydshieh removed it, should work nicely now :)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,539 | closed | [Doctest] Add `configuration_yolos.py` | Add `configuration_yolos.py` to `utils/documentation_tests.txt` for doctest.
Based on issue #19487
@sgugger could you please take a look at it?
Thanks =) | 10-12-2022 17:26:18 | 10-12-2022 17:26:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,538 | closed | [Whisper] Fix gradient checkpointing | # What does this PR do?
Fixes #19537
Sanity check:
```python
from transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration
import numpy as np
feature_extractor = WhisperFeatureExtractor()
config = WhisperConfig()
model_encoder = WhisperForConditionalGeneration(config).model.encoder
# enable checkpointing
model_encoder.gradient_checkpointing_enable()
# create dummy audio input
sample = {"array": np.ones(1000), "sampling_rate": 16000}
# pre-process audio input
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# forward pass
outputs = model_encoder(inputs)
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 10-12-2022 16:52:21 | 10-12-2022 16:52:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! |
transformers | 19,537 | closed | [Whisper] Gradient checkpointing fails | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
cc @ArthurZucker for info
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration
import numpy as np
feature_extractor = WhisperFeatureExtractor()
config = WhisperConfig()
model_encoder = WhisperForConditionalGeneration(config).model.encoder
# enable checkpointing
model_encoder.gradient_checkpointing_enable()
# create dummy audio input
sample = {"array": np.ones(1000), "sampling_rate": 16000}
# pre-process audio input
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# forward pass
outputs = model_encoder(inputs)
```
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Input In [1], in <cell line: 18>()
15 inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
17 # forward pass
---> 18 outputs = model_encoder(inputs)
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:682, in WhisperEncoder.forward(self, input_features, head_mask, output_attentions, output_hidden_states, return_dict)
678 return module(*inputs, output_attentions)
680 return custom_forward
--> 682 layer_outputs = torch.utils.checkpoint.checkpoint(
683 create_custom_forward(encoder_layer),
684 hidden_states,
685 None,
686 (head_mask[idx] if head_mask is not None else None),
687 )
688 else:
689 layer_outputs = encoder_layer(
690 hidden_states,
691 None,
692 layer_head_mask=(head_mask[idx] if head_mask is not None else None),
693 output_attentions=output_attentions,
694 )
AttributeError: module 'torch.utils' has no attribute 'checkpoint'
```
### Expected behavior
Forward pass without any hitch-ups! | 10-12-2022 16:43:49 | 10-12-2022 16:43:49 | Jumped the gun 😅 Doesn't quite yet work with the decoder!
```python
from transformers import WhisperFeatureExtractor, WhisperConfig, WhisperForConditionalGeneration
import numpy as np
import torch
feature_extractor = WhisperFeatureExtractor()
config = WhisperConfig()
model = WhisperForConditionalGeneration(config)
# enable checkpointing
model.gradient_checkpointing_enable()
# create dummy audio input
sample = {"array": np.ones(1000), "sampling_rate": 16000}
# pre-process audio input
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
# create dummy decoder input ids
decoder_input_ids = torch.arange(10).reshape(1, 10) # bsz, seq_len = (1, 10)
# forward pass
outputs = model(inputs, decoder_input_ids=decoder_input_ids)
```
<details>
<summary> Traceback </summary>
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [2], in <cell line: 22>()
19 decoder_input_ids = torch.arange(10).reshape(1, 10) # bsz, seq_len = (1, 10)
21 # forward pass
---> 22 outputs = model(inputs, decoder_input_ids=decoder_input_ids)
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:1168, in WhisperForConditionalGeneration.forward(self, input_features, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1163 if decoder_input_ids is None:
1164 decoder_input_ids = shift_tokens_right(
1165 labels, self.config.pad_token_id, self.config.decoder_start_token_id
1166 )
-> 1168 outputs = self.model(
1169 input_features,
1170 decoder_input_ids=decoder_input_ids,
1171 encoder_outputs=encoder_outputs,
1172 decoder_attention_mask=decoder_attention_mask,
1173 head_mask=head_mask,
1174 decoder_head_mask=decoder_head_mask,
1175 cross_attn_head_mask=cross_attn_head_mask,
1176 past_key_values=past_key_values,
1177 decoder_inputs_embeds=decoder_inputs_embeds,
1178 use_cache=use_cache,
1179 output_attentions=output_attentions,
1180 output_hidden_states=output_hidden_states,
1181 return_dict=return_dict,
1182 )
1183 lm_logits = self.proj_out(outputs[0])
1185 loss = None
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:1044, in WhisperModel.forward(self, input_features, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
1037 encoder_outputs = BaseModelOutput(
1038 last_hidden_state=encoder_outputs[0],
1039 hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None,
1040 attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None,
1041 )
1043 # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn)
-> 1044 decoder_outputs = self.decoder(
1045 input_ids=decoder_input_ids,
1046 attention_mask=decoder_attention_mask,
1047 encoder_hidden_states=encoder_outputs[0],
1048 head_mask=decoder_head_mask,
1049 cross_attn_head_mask=cross_attn_head_mask,
1050 past_key_values=past_key_values,
1051 inputs_embeds=decoder_inputs_embeds,
1052 use_cache=use_cache,
1053 output_attentions=output_attentions,
1054 output_hidden_states=output_hidden_states,
1055 return_dict=return_dict,
1056 )
1058 if not return_dict:
1059 return decoder_outputs + encoder_outputs
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:912, in WhisperDecoder.forward(self, input_ids, attention_mask, encoder_hidden_states, head_mask, cross_attn_head_mask, past_key_values, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
908 return module(*inputs, output_attentions, use_cache)
910 return custom_forward
--> 912 layer_outputs = torch.utils.checkpoint.checkpoint(
913 create_custom_forward(decoder_layer),
914 hidden_states,
915 attention_mask,
916 encoder_hidden_states,
917 head_mask[idx] if head_mask is not None else None,
918 cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None,
919 None,
920 )
921 else:
923 layer_outputs = decoder_layer(
924 hidden_states,
925 attention_mask=attention_mask,
(...)
933 use_cache=use_cache,
934 )
File ~/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py:235, in checkpoint(function, use_reentrant, *args, **kwargs)
232 raise ValueError("Unexpected keyword arguments: " + ",".join(arg for arg in kwargs))
234 if use_reentrant:
--> 235 return CheckpointFunction.apply(function, preserve, *args)
236 else:
237 return _checkpoint_without_reentrant(
238 function,
239 preserve,
240 *args
241 )
File ~/venv/lib/python3.8/site-packages/torch/utils/checkpoint.py:96, in CheckpointFunction.forward(ctx, run_function, preserve_rng_state, *args)
93 ctx.save_for_backward(*tensor_inputs)
95 with torch.no_grad():
---> 96 outputs = run_function(*args)
97 return outputs
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:908, in WhisperDecoder.forward.<locals>.create_custom_forward.<locals>.custom_forward(*inputs)
906 def custom_forward(*inputs):
907 # None for past_key_value
--> 908 return module(*inputs, output_attentions, use_cache)
File ~/venv/lib/python3.8/site-packages/torch/nn/modules/module.py:1131, in Module._call_impl(self, *input, **kwargs)
1127 # If we don't have any hooks, we want to skip the rest of the logic in
1128 # this function, and just call forward.
1129 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1130 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1131 return forward_call(*input, **kwargs)
1132 # Do not call functions when jit is used
1133 full_backward_hooks, non_full_backward_hooks = [], []
File ~/transformers/src/transformers/models/whisper/modeling_whisper.py:397, in WhisperDecoderLayer.forward(self, hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, layer_head_mask, cross_attn_layer_head_mask, past_key_value, output_attentions, use_cache)
393 hidden_states = self.self_attn_layer_norm(hidden_states)
395 # Self Attention
396 # decoder uni-directional self-attention cached key/values tuple is at positions 1,2
--> 397 self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None
398 # add present self-attn cache to positions 1,2 of present_key_value tuple
399 hidden_states, self_attn_weights, present_key_value = self.self_attn(
400 hidden_states=hidden_states,
401 past_key_value=self_attn_past_key_value,
(...)
404 output_attentions=output_attentions,
405 )
TypeError: 'bool' object is not subscriptable
```
</details>
Need to pass `encoder_attention_mask` and `past_key_value` correctly to the decoder layer... |
transformers | 19,536 | closed | Added type hints to `DebertaV2ForMultipleChoice` Pytorch | Type Hints for DebertaV2ForMultipleChoice | 10-12-2022 16:36:17 | 10-12-2022 16:36:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 I have added the output type<|||||>@IMvision12 Looks perfect, thanks! |
transformers | 19,535 | closed | Throw an error if `getattribute_from_module` can't find anything | # What does this PR do?
Throw an error if `getattribute_from_module` can't find anything - to avoid `RecursionError: maximum recursion depth exceeded while calling a Python object`.
**New error:**
```bash
ValueError: Could not find MarkupLMForMaskedLM neither <module 'transformers.models.markuplm' from '/home/yih_dar_huggingface_co/transformers-ydshieh/src/transformers/models/markuplm/__init__.py'> in nor in <module 'transformers' from 'src/transformers/__init__.py'>!
``` | 10-12-2022 16:09:53 | 10-12-2022 16:09:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,534 | closed | Remove `MarkupLMForMaskedLM` from `MAPPING` | # What does this PR do?
There is no `MarkupLMForMaskedLM`.
BTW, could we check if the arguments passed to the recursive call are identical to the inputs: if so, don't call recursively.
https://github.com/huggingface/transformers/blob/4edb3e49f6bd3d1a4f6862452ecaf07108d62ff7/src/transformers/models/auto/auto_factory.py#L548-L558
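A rough sketch of what such a guard could look like (illustrative only, not the exact change in this PR):

```python
import importlib


def getattribute_from_module(module, attr):
    if attr is None:
        return None
    if isinstance(attr, tuple):
        return tuple(getattribute_from_module(module, a) for a in attr)
    if hasattr(module, attr):
        return getattr(module, attr)
    # Only retry on the top-level `transformers` module when it is a *different*
    # module than the one we just searched; otherwise raise instead of recursing forever.
    transformers_module = importlib.import_module("transformers")
    if module is not transformers_module:
        return getattribute_from_module(transformers_module, attr)
    raise ValueError(f"Could not find {attr} in {transformers_module}!")
```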
I can work on that if you are OK, @sgugger .
I freak out when I see
```bash
RecursionError: maximum recursion depth exceeded while calling a Python object
``` | 10-12-2022 16:01:51 | 10-12-2022 16:01:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,533 | closed | Add typing to activations.py | # What does this PR do?
- Adds typing
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 15:45:00 | 10-12-2022 15:45:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,532 | closed | Build Push CI images also in a daily basis | # What does this PR do?
PR #19170 separated the images for push CI and daily CI. However, it only builds the push CI images (those with the postfix `-push-ci`) when changes in `setup.py` are detected.
**We should also re-build the push CI images in a daily basis**, as 3rd party libraries might have newer versions, like `datasets`, `tokenizers` etc. | 10-12-2022 15:33:08 | 10-12-2022 15:33:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,531 | closed | Make `MobileBert` tokenizers independent from `Bert` | # What does this PR do?
Copied the code from `Bert` tokenizers into `MobileBert` tokenizers to make the latter self-contained.
Fixes #19303
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-12-2022 14:34:07 | 10-12-2022 14:34:07 | |
transformers | 19,530 | closed | Update README.md | Fixed a grammatical error.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 13:45:03 | 10-12-2022 13:45:03 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19530). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,529 | closed | Use memory efficient attention in CLIP | # What does this PR do?
This PR adds support for the CLIP model to use memory-efficient attention, similar to https://github.com/huggingface/diffusers/pull/532
~This is critical when you want to use stable diffusion with large batch size. Because when unet is using memory efficient attention, the bottleneck becomes the initial text encoding step.~
edit: I just tested to see at which batch size it becomes the bottleneck, but it seems that's not the bottleneck now. I'm not sure anymore. 🤔
Anyway, with this improvement I'm able to run Stable Diffusion with batch size 128 on an RTX 3090, see [cccntu/accelerated-stable-diffusion](https://github.com/cccntu/accelerated-stable-diffusion)
@patrickvonplaten
| 10-12-2022 13:38:12 | 10-12-2022 13:38:12 | Is https://github.com/huggingface/diffusers/pull/532 for CLIP in Stable Diffusion? Is it necessary to port the implementation to `transformers` **CLIP** here to make running SD more efficient ...? @patrickvonplaten is the best one to make the decision.<|||||>Looks like this would be a great use of [custom modeling code](https://huggingface.co/docs/transformers/custom_models#writing-a-custom-model) instead of trying to change the model code.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Note that in `transformers` we don't want to necessarily support all the optimizations natively in the core library - also because the fundamental design is different (modeling code is copied rather than abstracted like in `diffusers`).
Maybe also cc @michaelbenayoun here <|||||>Hi @cccntu,
I agree with @patrickvonplaten here. Also, note that we have a library for optimizations called [Optimum](https://github.com/huggingface/optimum). Although we do not support custom optimizations for each modeling file, I think you might be interested in contributing optimizations such as the ones in your PR there!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,528 | closed | Allow TFBertTokenizer to use Tensorflow text BertTokenizer (and not FastBertTokenizer) to make it servable by TF Serving | ### Feature request
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't do it because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize operations (https://github.com/tensorflow/serving/issues/2064).
It would be good if we could let [TFBertTokenizer](https://github.com/huggingface/transformers/blob/4ed0fa3676ad8900eaa982a6c5c2ad6b75c8ea46/src/transformers/models/bert/tokenization_bert_tf.py) give the user an option not to use the TensorFlow FastBertTokenizer when creating a TFBertTokenizer, so that it is servable on TF Serving.
It would consist of moving (or creating an option to change) this
https://github.com/huggingface/transformers/blob/4ed0fa3676ad8900eaa982a6c5c2ad6b75c8ea46/src/transformers/models/bert/tokenization_bert_tf.py#L67-L69
To this:
```python
import tensorflow as tf

# to avoid naming collision with transformers BertTokenizer
from tensorflow_text import BertTokenizer as TFBertTokenizerLayer

# the snippet below would live inside TFBertTokenizer.__init__, where `vocab_list`
# and `do_lower_case` are already available
lookup_table = tf.lookup.StaticVocabularyTable(
tf.lookup.KeyValueTensorInitializer(
keys=vocab_list,
key_dtype=tf.string,
values=tf.range(
tf.size(vocab_list, out_type=tf.int64), dtype=tf.int64),
value_dtype=tf.int64
),
num_oov_buckets=1
)
self.tf_tokenizer = TFBertTokenizerLayer(
lookup_table, token_out_type=tf.int64, lower_case=do_lower_case
)
```
### Motivation
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't do it because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize operations (https://github.com/tensorflow/serving/issues/2064).
As this library moves much faster on this kind of issue than TF Serving, I thought it was worth trying to solve it from here.
### Your contribution
I can definitely submit a PR with that if you approve the idea.
EDIT: I've created https://github.com/huggingface/transformers/pull/19590 to showcase the idea. | 10-12-2022 12:59:23 | 10-12-2022 12:59:23 | |
transformers | 19,527 | closed | [Whisper] Freeze params of encoder | # What does this PR do?
Adds a method to Whisper to freeze the parameters of the encoder and two associated tests.
API:
```python
whisper_model.freeze_encoder()
```
This is in-line with Wav2Vec2 where we freeze the feature encoder with the method [`.freeze_feature_encoder()`](https://github.com/huggingface/transformers/blob/bbd150e92f84db72e7507d0c3ce69474b2948839/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1220):
```python
wav2vec2_model.freeze_feature_encoder()
```
## Sanity check
```python
from transformers import WhisperConfig, WhisperForConditionalGeneration
config = WhisperConfig()
model = WhisperForConditionalGeneration(config)
# check if params are frozen
encoder_grads = [param.requires_grad for param in model.model.encoder.parameters()]
decoder_grads = [param.requires_grad for param in model.model.decoder.parameters()]
print("Before freezing encoder...")
print(f"All encoder params trainable: {all(encoder_grads)}")
print(f"All decoder params trainable: {all(decoder_grads)}")
# freeze params of encoder
model.freeze_encoder()
# check if params are frozen
encoder_grads = [param.requires_grad for param in model.model.encoder.parameters()]
decoder_grads = [param.requires_grad for param in model.model.decoder.parameters()]
print("After freezing encoder...")
print(f"All encoder params trainable: {all(encoder_grads)}")
print(f"All decoder params trainable: {all(decoder_grads)}")
```
```
Before freezing encoder...
All encoder params trainable: True
All decoder params trainable: True
After freezing encoder...
All encoder params trainable: False
All decoder params trainable: True
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 12:53:46 | 10-12-2022 12:53:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,526 | closed | Fix MarkupLMProcessor option flag in MarkupLMProcessor documentation | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a mistake in the docs for MarkupLMProcessor for use case 5. Currently the heading says `apply_ocr=False`, I believe this should be `parse_html=False`.
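For context, a short sketch of what use case 5 looks like with `parse_html=False` (values are illustrative and only follow the pattern in the MarkupLM docs):

```python
from transformers import MarkupLMProcessor

processor = MarkupLMProcessor.from_pretrained("microsoft/markuplm-base")
processor.parse_html = False  # nodes and xpaths are provided directly, no HTML parsing

nodes = ["hello", "world"]
xpaths = ["/html/body/div/li[1]/div/span", "/html/body/div/li[1]/div/span"]
encoding = processor(nodes=nodes, xpaths=xpaths, return_tensors="pt")
```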
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
cc @NielsRogge @SaulLu @sgugger
| 10-12-2022 12:53:15 | 10-12-2022 12:53:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,525 | closed | Added onnx config whisper | # What does this PR do?
Fixes # (issue)
This PR adds an ONNX config and helper functions for exporting Whisper to ONNX via Optimum and `transformers.onnx`.
| 10-12-2022 12:16:58 | 10-12-2022 12:16:58 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@lewtun @echarlaix The bug is fixed now. The incorrect results were because the export was happening with seqlength 1 due to typo in onnx config generate_dummy_input function.<|||||>Thanks for the review @sgugger and catching those last issues - I've checked the latest changes and think this can now be merged @mht-sharma <|||||>Hello,
I tried to generate an ONNX model using the [docs](https://huggingface.co/docs/transformers/serialization#configuration-based-approach).
For inference, I passed the audio features and `decoder_input_ids`, and the output was two arrays with shapes (1, 2, 768) and (1, 1500, 768). Could you please help me with how I should use these outputs to generate a transcription?
Thank you.<|||||>Hi @zara0m please follow the example in this PR for export and inference using ONNX model. https://github.com/huggingface/optimum/pull/420<|||||>> Hi @zara0m please follow the example in this PR for export and inference using ONNX model. [huggingface/optimum#420](https://github.com/huggingface/optimum/pull/420)
Thank you very much for your help and quick response!
I tested it with the base and small models on some non-English audio, but the outputs were not similar to the Whisper model's; maybe it translates instead of transcribing. How can I fix this?
Also, is there any way I can get the begin/end time of each sentence (like the `transcribe` function of the Whisper model)?
Thank you. |
transformers | 19,524 | closed | Bart configuration update | Update to bart config.

Tests passed

| 10-12-2022 11:47:04 | 10-12-2022 11:47:04 | Hey @ydshieh! It turned out that copy-pasting the "(with random weights)" text was causing the check_code_quality test failure. But unfortunately I don't know what is causing this ConnectionResetError: [Errno 104] while building PR Documentation<|||||>> Hey @ydshieh! It turned out that copy-pasting the "(with random weights)" text was causing the check_code_quality test failure.
Glad it works now 💯 Thank you!
> But unfortunately I don't know what is causing this ConnectionResetError: [Errno 104] while building PR Documentation
You can ignore this :-)
<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,523 | closed | [X-CLIP] Fix doc tests | # What does this PR do?
Fixes #19513
This PR fixes X-CLIP's AutoProcessor and adds doc tests. | 10-12-2022 11:39:15 | 10-12-2022 11:39:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,522 | closed | Update configuration_bart.py | Updated bart config in accordance with this issue:

Tests passed

| 10-12-2022 11:27:29 | 10-12-2022 11:27:29 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,521 | closed | [Whisper] Don't return attention mask in feat extractor | # What does this PR do?
Whisper pads all audio inputs to a fixed max length (=30s) by appending the silence token (zero) to the end of any sequences shorter than the max length. Hence, the model does **not** use an attention mask: all inputs have length 30s, padding is treated through use of the silence token rather than an attention mask.
This PR sets the default value of `return_attention_mask` to `False` in the feature extractor. In doing so, feature extractor methods such as `__call__` and `pad` will **not** return an `attention_mask` by default.
This behaviour can be overridden by passing the arg `return_attention_mask=True` to these methods.
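For illustration, a minimal usage sketch (`speech` is assumed to be a 1-D array of 16 kHz audio samples; the checkpoint name is only an example):

```python
from transformers import WhisperFeatureExtractor

feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-tiny.en")

# New default: only `input_features` is returned, no `attention_mask`.
inputs = feature_extractor(speech, sampling_rate=16000, return_tensors="pt")

# The previous behaviour can still be requested explicitly.
inputs_with_mask = feature_extractor(
    speech, sampling_rate=16000, return_attention_mask=True, return_tensors="pt"
)
```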
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 11:02:12 | 10-12-2022 11:02:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,520 | closed | Remove bert fast dependency from electra | # What does this PR do?
- Related to #19303
- Removed `Bert` fast dependency from `Electra` code base.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-12-2022 10:31:15 | 10-12-2022 10:31:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,519 | closed | [Examples] Generalise Seq2Seq ASR to handle Whisper | # What does this PR do?
Generalises `run_speech_recognition_seq2seq.py` to handle Whisper.
To train the "tiny.en" model on LibriSpeech dummy:
<details>
<summary> Bash script </summary>
```
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_seq2seq.py \
--dataset_name="hf-internal-testing/librispeech_asr_dummy" \
--model_name_or_path="openai/whisper-tiny.en" \
--dataset_config_name="clean" \
--train_split_name="validation" \
--eval_split_name="validation" \
--output_dir="./" \
--preprocessing_num_workers="1" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="text" \
--save_strategy="no" \
--evaluation_strategy="epoch" \
--logging_steps="10" \
--save_total_limit="1" \
--generation_max_length="40" \
--generation_num_beams="1" \
--fp16 \
--gradient_checkpointing \
--group_by_length \
--predict_with_generate \
--do_train --do_eval \
--do_lower_case
```
</details>
To train the "medium.en" model on LibriSpeech 960h:
<details>
<summary> Bash script </summary>
```
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_seq2seq.py \
--model_name_or_path="openai/whisper-medium.en" \
--dataset_name="librispeech_asr" \
--dataset_config_name="all" \
--train_split_name="train.clean.100+train.clean.360+train.other.500" \
--eval_split_name="validation.clean" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-librispeech" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--generation_num_beams="1" \
--length_column_name="input_length" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--predict_with_generate \
--use_auth_token
```
</details>
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 10:27:24 | 10-12-2022 10:27:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger this one's ready to go! Just an FYI in-case you wanted to take a look :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19519). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,518 | closed | Fix whisper doc | # What does this PR do?
Fixes the whisper doc of the forward pass.
| 10-12-2022 09:59:01 | 10-12-2022 09:59:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,517 | closed | Update configuration_bart.py | Update following #19487 issue

Tests passed | 10-12-2022 09:57:18 | 10-12-2022 09:57:18 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19517). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,516 | closed | Fix bugs when LayoutLMv3Tokenizer use model microsoft/layoutlmv3-base-chinese | # What does this PR do?
`microsoft/layoutlmv3-base-chinese` **uses the `sentencepiece` tokenizer** file "sentencepiece.bpe.model" from XLM-RoBERTa instead of "tokenizer.json". When `LayoutLMv3Tokenizer` loads this model with `LayoutLMv3Tokenizer.from_pretrained("microsoft/layoutlmv3-base-chinese")`, it raises the following exception:
```
[/usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv3/tokenization_layoutlmv3.py](https://localhost:8080/#) in __init__(self, vocab_file, merges_file, errors, bos_token, eos_token, sep_token, cls_token, unk_token, pad_token, mask_token, add_prefix_space, cls_token_box, sep_token_box, pad_token_box, pad_token_label, only_label_first_subword, **kwargs)
322 )
323
--> 324 with open(vocab_file, encoding="utf-8") as vocab_handle:
325 self.encoder = json.load(vocab_handle)
326 self.decoder = {v: k for k, v in self.encoder.items()}
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
I have tried to fix this bug by merging code from `XLMRobertaTokenizer`.
Also, after `LayoutLMv3Tokenizer` is changed, `LayoutLMv3Converter` should be updated as well so it can convert to the fast tokenizer.
I have written a new test file to test the Chinese tokenizer.
!!!NOTICE
I have found some differences when processing Chinese because of `sentencepiece`.
For example, when we process an English document, we may have a bounding-box text `hello word` consisting of the words `["hello", "word"]`; we tokenize by the words `["hello", "word"]` and get token ids `[[42891], [14742]]`, which is absolutely fine.
But if we have a Chinese bounding-box text `汇丰` consisting of the words `["汇", "丰"]` and we tokenize by the words `["汇", "丰"]`, we get token ids `[[6, 47360], [6, 49222]]`, which is a little strange: **both token ids contain `6`, and token id `6` is a "▁" (U+2581), but we did not input `▁`**. This is because **sentencepiece adds a space at the beginning** (see [this](https://github.com/google/sentencepiece/issues/15)).
So the important thing is that, **when using the Chinese tokenizer, do not use the bounding-box words as input; use the bounding-box text as input**, here `["汇丰"]`.
```python
from transformers import AutoTokenizer
eng_tok = AutoTokenizer.from_pretrained("roberta-base")  # microsoft/layoutlmv3-base refers to roberta-base
eng_tok(["hello", "word"], add_special_tokens=False)["input_ids"] # [[42891], [14742]]
xlm_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")  # microsoft/layoutlmv3-base-chinese refers to xlm-roberta-base
xlm_tok(["汇", "丰"], add_special_tokens=False)["input_ids"] # [[6, 47360], [6, 49222]]
xlm_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
xlm_tok(["汇丰"], add_special_tokens=False)["input_ids"] # [[6, 47360, 49222]]
```
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 09:10:14 | 10-12-2022 09:10:14 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi,
Could you try using [LayoutXLMTokenizer](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizer)/[LayoutXLMTokenizerFast](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizerFast) instead?
Normally, these should be compatible with microsoft/layoutlmv3-base-chinese.<|||||>> Hi,
>
> Could you try using [LayoutXLMTokenizer](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizer)/[LayoutXLMTokenizerFast](https://huggingface.co/docs/transformers/model_doc/layoutxlm#transformers.LayoutXLMTokenizerFast) instead?
>
> Normally, these should be compatible with microsoft/layoutlmv3-base-chinese.
@NielsRogge sorry, I didn't find that `LayoutXLMTokenizer` is used for this model, maybe this will make code easier,but maybe more document is better. I will close this pr |
transformers | 19,515 | closed | There is a type annotation error | ### System Info
Transformer 4.22.0
Ubuntu 20.04
Python 3.10.6
### Who can help?
@sgugger, @patil-suraj
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The type annotation bug is [here](https://github.com/huggingface/transformers/blob/main/examples/research_projects/rag/finetune_rag.py#:~:text=model%3A%20GenerativeQAModule%20%3D%20GenerativeQAModule(args))
Trying to annotate `model` as type `GenerativeQAModule` at line 586 violates the rules of type annotation,
because `model` was already initialized as `None` or another value at line 536.
```
#transformers/examples/research_projects/rag/finetune_rag.py #Line 536
def main(args=None, model=None) -> GenerativeQAModule:
...
if model is None:
model: GenerativeQAModule = GenerativeQAModule(args)
```
I found this defect by using the tool called [Pyre](https://pyre-check.org/docs/getting-started/).
`pyre init`
`pyre`
### Expected behavior
I expected no defect to be reported.
| 10-12-2022 08:36:43 | 10-12-2022 08:36:43 | Thanks for reporting. Note that we do not maintain the examples lying in the research project folder, and we only use type annotations for documentation purpose, not for type-checkers.<|||||>@sgugger Will you have any comming project related to vision transformer that I can contribute to. <|||||>Can i get this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,514 | closed | [Examples] Fix typos in run speech recognition seq2seq | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes minor typos in comments of `run_speech_recognition_seq2seq`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 08:33:48 | 10-12-2022 08:33:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,513 | closed | X-CLIP example error | ### System Info
google colab
python=3.7.14
transformers=4.23.1
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
example at <https://huggingface.co/docs/transformers/main/model_doc/xclip#transformers.XCLIPModel>
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-3-bb99bb9a026f>](https://localhost:8080/#) in <module>
10
11 inputs = processor(
---> 12 text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True
13 )
14
3 frames
[/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
2776 return_length=return_length,
2777 verbose=verbose,
-> 2778 **kwargs,
2779 )
2780
TypeError: _batch_encode_plus() got an unexpected keyword argument 'images'
```
- change `images` -> `videos`
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# change images -> videos
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], videos=image, return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-8c627970ac41>](https://localhost:8080/#) in <module>
11 # change images -> videos
12 inputs = processor(
---> 13 text=["a photo of a cat", "a photo of a dog"], videos=image, return_tensors="pt", padding=True
14 )
15
1 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/videomae/feature_extraction_videomae.py](https://localhost:8080/#) in __call__(self, videos, return_tensors, **kwargs)
147 if not valid_videos:
148 raise ValueError(
--> 149 "Videos must of type `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]` (single"
150 " example), `List[List[PIL.Image.Image]]`, `List[List[np.ndarray]]`, `List[List[torch.Tensor]]` (batch"
151 " of examples)."
ValueError: Videos must of type `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[torch.Tensor]` (single example), `List[List[PIL.Image.Image]]`, `List[List[np.ndarray]]`, `List[List[torch.Tensor]]` (batch of examples).
```
- change `videos=image` -> `videos=[image]`
```python
from PIL import Image
import requests
from transformers import XCLIPProcessor, XCLIPModel
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32")
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# change videos=image -> videos=[image]
inputs = processor(
text=["a photo of a cat", "a photo of a dog"], videos=[image], return_tensors="pt", padding=True
)
outputs = model(**inputs)
logits_per_video = outputs.logits_per_video # this is the video-text similarity score
probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
```
```python
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-5-cbcb07e98104>](https://localhost:8080/#) in <module>
14 )
15
---> 16 outputs = model(**inputs)
17 logits_per_video = outputs.logits_per_video # this is the video-text similarity score
18 probs = logits_per_video.softmax(dim=1) # we can take the softmax to get the label probabilities
7 frames
[/usr/local/lib/python3.7/dist-packages/transformers/models/x_clip/modeling_x_clip.py](https://localhost:8080/#) in forward(self, hidden_states, attention_mask, causal_attention_mask, output_attentions)
435 batch_size = batch_time // self.num_frames
436 msg_token = self.message_fc(hidden_states[:, 0, :])
--> 437 msg_token = msg_token.view(batch_size, self.num_frames, hidden_size)
438
439 msg_token = msg_token + self.drop_path(self.message_attn(self.message_ln(msg_token))[0])
RuntimeError: shape '[0, 8, 768]' is invalid for input of size 768
```
[colab](https://colab.research.google.com/drive/1Qq8qTx1SWsdEE4PC3h6Guta8lrRQjVLk?usp=sharing)
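For anyone hitting the same `RuntimeError`, a hedged workaround sketch, reusing `processor`, `model`, and `image` from the snippet above. The `[0, 8, 768]` shape suggests this checkpoint expects 8-frame clips, so a still image can be repeated into a fake clip:

```python
inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    videos=[image] * 8,  # 8 copies of the still image -> one clip of the expected length
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
probs = outputs.logits_per_video.softmax(dim=1)
```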
### Expected behavior
run... | 10-12-2022 08:16:33 | 10-12-2022 08:16:33 | |
transformers | 19,512 | closed | [FLAX] Whisper | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 08:13:33 | 10-12-2022 08:13:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19512). All of your documentation changes will be reflected on that endpoint.<|||||>Hi,
I need a little clarification about implementing the `FlaxWhisperDecoder` module.
What would be the best way to pass `past_key_values_length` to the module?
Reference in Pytorch implementation.
https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/modeling_whisper.py#L863-L873
@patrickvonplaten @ydshieh @patil-suraj <|||||>Whisper on TPU will make :fire: colab demos<|||||>
<|||||>Awesome work here! Feel free to ping me for a review once it is ready 😄 <|||||>Hi,
I have finished the model and working on the test cases now.
The pt<->flax equivalence test is failing, even though the `model.generate` produce the exact speech-to-text like the PyTorch model.

I have attached steps to reproduce the issue in this notebook - https://colab.research.google.com/drive/1KmO8OBUpHfs1uYA_eSwamQAXnjsdbkRS?usp=sharing
Any pointers will be helpful.
Thanks
@patrickvonplaten @patil-suraj @ydshieh @ArthurZucker <|||||>Hi @kamalkraj First, thank you for this awesome PR!
Regarding the PT/Flax tests, I probably need to improve that PT/Flax equivalence tests to make it (a bit) easier to find out which layers gives the larger difference.
In the meantime, I have to say there is no easy way to debug such issue. We need patience to find out at which layer(s) we have the first large difference (greater than the tolerance) and see what's wrong inside that layer.
This is usually tedious and involving manually debugging process.
Anyway, I can open a PR to make the process (a bit) easier - if you want to wait a bit. But notice that we still need similar process even that PR is merged.<|||||>Will try to get #18420 merged so that we can maybe use the `find_pt_fx_differences(pt_outputs, fx_outputs)` function! But in the mean time, you should set `output_hidden_states=True` and check where the lists differ 🤗 <|||||>Hi @kamalkraj Actually that test is quite good enough, but we need to change a bit to debug more.
The last 2 commit in this [branch](https://github.com/huggingface/transformers/commits/temp-debug-whisper-flax) could log more information.
If you run the tests like
```bash
RUN_PT_FLAX_CROSS_TESTS=true python3 -m pytest -v tests/models/whisper/test_modeling_flax_whisper.py -k "test_equivalence_pt_to_flax"
```
it logs something
```
max diff. in outputs.logits: 0.0020506680011749268
```
but it doesn't fail the test -> it continues. So far, I got
```bash
E AssertionError: <class 'list'> != <class 'tuple'> : outputs.decoder_hidden_states: Output types differ between Flax and PyTorch
```
so you will have to look at the output type of `decoder_hidden_states` and make sure the type is the same as the PyTorch one.
Continuing this process will eventually show you all the differences, and you will get a better idea of where to debug in the modeling code.
Also, it seems that when running the tests from `tests/models/whisper/test_modeling_whisper.py`, we have some shape issue. This is another thing to debug.
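For the manual pass, something along these lines is usually enough to spot the first layer that diverges (the models and inputs here are whatever the equivalence test builds - just an illustrative sketch):
```python
import numpy as np

pt_outputs = pt_model(**pt_inputs, output_hidden_states=True)
fx_outputs = fx_model(**fx_inputs, output_hidden_states=True)

# walk the decoder hidden states layer by layer and print the max absolute gap
for idx, (pt_h, fx_h) in enumerate(zip(pt_outputs.decoder_hidden_states, fx_outputs.decoder_hidden_states)):
    diff = np.max(np.abs(pt_h.detach().numpy() - np.asarray(fx_h)))
    print(f"decoder hidden state {idx}: max abs diff = {diff:.6f}")
```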
Hopefully this gives you some idea of how we can debug here 🤗
<|||||>Thanks, @ydshieh and @ArthurZucker
<|||||>To make for a more consistent API across models, couldn't we swap out `past_key_values_length` and instead compute `position_ids` to get the current positional embeddings for the decoder? It feels like this would make it easier to fit Whisper in with other finetuning codebases (no need to create custom logic for computing `past_key_values_length` when dealing with Whisper). As the code currently stands, I think it would actually give incorrect outputs when decoding a batch when each element of the batch has different decoder prefix/prompt tokens. Computing position ids from the attention mask would also allow for either left or right padding.
I have another Flax Whisper implementation with .from_pretrained(..., from_pt=True) working correctly and it giving correct outputs for variable length prompts that I'd be happy to share (or create a separate PR for). It also adds some stuff to the generation utilities to support prompt tokens to the decoder that already exist in the PyTorch utilities (using prompt tokens instead of `model.config.decoder_start_token_id` if specified).<|||||>I haven't look into this. But @andyehrenberg do you suggest a different way of computation in Flax Whisper than the one implemented in our PyTorch/TensorFlow Whisper?
It's also better for @kamalkraj to express if he would like to continue this PR before we go ahead.<|||||>@ydshieh @andyehrenberg
If there is already a working implementation, please continue.
I am closing this one.
Thanks<|||||>@ydshieh I guess what I'm suggesting for this could also be helpful for the PyTorch/TF implementations to improve flexibility/compatibility with existing codebases that use `position_ids` for other models (such as when finetuning).
For example, the use-case I'm working on is fine-tuning Whisper with RL (trying to expose it to its own outputs to reduce hallucinations). At each step when collecting rollouts, it is given a batch of audio features and decoder prompts (from previous audio snippets) - these prompts are of varying lengths, so padding/attention masks are needed, and the position embeddings need to adjust accordingly. And then when doing PPO updates on these steps, the position embeddings need to be computed correctly based off of which timesteps (tokens) are padding.
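Concretely, what I mean by computing them from the mask is something like this (hedged sketch, illustrative names):
```python
import jax.numpy as jnp

def position_ids_from_mask(decoder_attention_mask):
    # cumulative count of real tokens per sequence; padding slots are clamped to 0
    # (they are masked out anyway), which works for both left and right padding
    positions = jnp.cumsum(decoder_attention_mask, axis=-1) - 1
    return jnp.maximum(positions, 0)
```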
The implementation in this PR wouldn't accommodate this scenario as it assumes the same `past_key_values_length` for each sequence in the batch, whereas the implementation I've worked on uses `position_ids` to keep track of where we are in each sequence of the batch. Earlier I had use a different method that only used the attention mask along with another caching method in the decoder, but using position_ids is much simpler and accommodates multiple padding schemes more simply. |
transformers | 19,511 | open | ERNIE and tensorflow2 | ### Feature request
I checked the Transformers documentation and found that only `ErnieModel`, which supports PyTorch, exists. Is there any plan to release a `TFErnieModel` later?
### Motivation
I am using the TensorFlow 2 framework and want to use ERNIE.
### Your contribution
sorry. | 10-12-2022 07:01:30 | 10-12-2022 07:01:30 | cc @amyeroberts <|||||>Hi @Smile-L-up. Thanks for raising this issue.
You can check if there's any ongoing work to port a model to TensorFlow, by searching the [open issues](https://github.com/huggingface/transformers/issues?q=is%3Aissue+is%3Aopen+ernie) and [PRs](https://github.com/huggingface/transformers/pulls?q=is%3Apr+is%3Aopen+ernie). I don't think there's any plans or ongoing work to port ERNIE.
If you're interested in adding the model yourself, we have a great guide showing all the steps [here](https://huggingface.co/docs/transformers/v4.23.1/en/add_tensorflow_model) and we're of course happy to help along the way. |
transformers | 19,510 | closed | RFC: Add quantization capability to the Transformers Trainer API | ### Feature request
Add quantization capability to the Transformers Trainer API
### Motivation
Quantization is one of the most popular model compression technologies and is widely used in the industry. On Intel(R) Xeon platforms and Nvidia GPU platforms it can bring significant performance speedups with only a slight accuracy loss. Having an easy-to-use quantization interface in Transformers will benefit customers.
Compared with the vanilla PyTorch quantization feature, the proposed interface supports accuracy-aware tuning to solve the common accuracy loss issue that arises when applying quantization.
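For reference, a minimal sketch of what the vanilla PyTorch route looks like today - one-shot dynamic quantization with no accuracy feedback loop (the model name is only an example):
```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
# quantize all Linear layers to int8, with no accuracy-aware tuning
quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
```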
**Design**
The proposed interface is like below:
```
class AccuracyAwareTuningConf:
def __init__(self, accuracy_criterion='relative', accuracy_loss=0.01, metric_name='F1', timeout=0):
# The tuning configuration used to define accuracy goal.
# Args:
# accuracy_criterion: String. relative loss or absolute loss.
# accuracy_loss: Float. The tolerated accuracy loss value.
# metric_name: String. The metric user cares about.
# timeout: Integer. 0 means early stop. non-zero means returns within defined time scope. unit is minute.
class Trainer:
...
    def quantize(self, model=None, approach='static', calib_dataset=None, tuning_config=None):
# The interface used to quantize model
#
# Args:
# model: Optional. if None, the model in Trainer initialization will be used.
# approach: String. "auto", "static" and "dynamic" are three supported quantization approaches.
# calib_dataset: Optional. if None, the train_dataset in Trainer initialization will be used.
# tuning_config: Optional. if none, it means doing quantization without tuning.
```
Note this interface is device agnostic. The user can specify the device used to run calibration and quantization via `training_args.device`.
**Use Case**
Taking the Transformers text-classification task as an example, the user can add the code below to quantize the model.
***Quantization without tuning***
```
# Initialize our Trainer [original code]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# one line code change to do quantization without accuracy-aware tuning
q_model = trainer.quantize()
```
***Quantization with tuning***
```
# Initialize our Trainer [original code]
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset if training_args.do_train else None,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
data_collator=data_collator,
)
# two line code changes to do quantization with accuracy-aware tuning
conf = AccuracyAwareTuningConf(accuracy_loss=0.005, metric='F1')
q_model = trainer.quantize(tuning_config=conf)
```
**Performance**
Below are the accuracy and performance data of some quantized NLP models, tuned on an Intel(R) Xeon(R) Platinum 8380H CPU @ 2.90GHz with 4 cores per instance and batch size 1.
| Model | INT8 Accuracy | FP32 Accuracy | Acc Ratio [(INT8-FP32)/FP32] | INT8 Throughput (samples/sec) | FP32 Throughput (samples/sec) | Performance Ratio [INT8/FP32] |
|---|---|---|---|---|---|---|
| Barthez MRPC | 83.92% | 83.81% | 0.14% | 161.06 | 89.61 | 1.80x |
| BERT base MRPC | 89.90% | 90.69% | -0.88% | 244.27 | 125.28 | 1.95x |
| BERT base RTE | 69.31% | 69.68% | -0.52% | 259.21 | 125.72 | 2.06x |
| BERT base SST2 | 91.06% | 91.86% | -0.87% | 262.73 | 125.69 | 2.09x |
| BERT large MRPC | 89.50% | 90.38% | -0.97% | 88.92 | 36.55 | 2.43x |
| CamemBERT base MRPC | 86.70% | 86.82% | -0.14% | 236.6 | 121.81 | 1.94x |
| Deberta MRPC | 90.88% | 90.91% | -0.04% | 149.76 | 84.72 | 1.77x |
| DistilBERT base MRPC | 88.23% | 89.16% | -1.05% | 426.4 | 246.13 | 1.73x |
| mBart WNLI | 56.34% | 56.34% | 0.00% | 66.23 | 30.86 | 2.15x |
| lvwerra/pegasus-samsum | 42.39 | 42.67 | -0.67% | 3.86 | 1.14 | 3.38x |
| Roberta Base MRPC | 88.25% | 88.18% | 0.08% | 245.05 | 123.53 | 1.98x |
### Your contribution
We are glad to contribute a PR to the HF community if this idea gets approved.
We would love to hear feedback from HF maintainer about this proposal. | 10-12-2022 05:52:31 | 10-12-2022 05:52:31 | cc @sgugger <|||||>This looks very exciting! The API suggested makes sense to me, I'd need to see the whole code to comment more :-)
By all means, please open a PR and tag me for review!<|||||>@ftian1 [Optimum](https://github.com/huggingface/optimum) has a quantization API which is device- and backend-agnostic. It also offers other kinds of optimization for accelerating inference so I think it makes more sense to keep all this in the same place. Do not hesitate to open a PR or create an issue there with your suggestions and we will be happy to discuss it!<|||||>@regisss @sgugger thanks for the comments. why we think it's valuable to contribute to transformers is because it would bring better user experience with fewer line code changes comparing with Optimum.
we can have a PR at first and then do further discussion with your guidance. thanks<|||||>@ftian1 While I agree with you that it would provide a better UX for some users, there are a couple of points that make me think about it twice:
- I do not think that such a quantization API should be tailored for accuracy-aware quantization only or for a specific backend. Users should be able to use ONNX Runtime, Torch FX, Intel Neural Compressor or any other available backend. The ways these backends work and are configured are different from each other, so the API should be able to manage this. Optimum enables to do it so a possible solution would be to have a wrapper around Optimum's quantization API.
- I believe that having different places in the Hugging Face ecosystem where users can perform quantization will create quite a lot of confusion and will draw them away from other cool Optimum's optimization features, making it more difficult to deploy fast optimized models.<|||||>@regisss thanks for the comments. I will invite Optimum owner @echarlaix to review this RFC and see what's her inputs.<|||||>Hi @ftian1,
We already support INC quantization aware training (as well as static and dynamic quantization) in `optimum` so it would be redundant to add this feature to `transformers` in my opinion and could create some confusion on which library to use to perform optimization as @regisss mentionned. As we already discussed, I think it makes sense to keep everything related to `neural-compressor` in `optimum` and to increase promotion around it. Also happy to discuss any modifications you would like us to apply on the `IncTrainer` ! (especially given our plan to refactorize the different `IncQuantizer` and `IncTrainer` classes after `neural-compressor` next big release)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,509 | open | INF encountered when using sampling with temperature. | ### System Info
latest transformers version == 4.24.0
When generating samples with mBART, I encounter this problem:

Looking deeper into the code, I found that the problem stems from the beam score being added to `next_token_scores` here:
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/generation_utils.py#L2566
The original value of `beam_scores` is 0, but when using a temperature like 0.5, the score is also divided by the temperature value in the `logits_warper`, so it gets larger and larger and finally causes `next_token_scores` to overflow.
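A plain-Python sketch of the effect (numbers are made up) - the cumulative score is re-divided by the temperature at every step, so it grows geometrically:
```python
temperature = 0.5
beam_score = 0.0
for step in range(20):
    next_token_logprob = -10.0  # some arbitrary log-probability for the sampled token
    beam_score = (beam_score + next_token_logprob) / temperature
    print(step, beam_score)
# the magnitude quickly exceeds what half precision can represent, and eventually float32 too
```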
### Who can help?
@patrickvonplaten @Narsil @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
**I provide a simple code that can reproduce this issue.**
import transformers
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model = model.cuda()
src = 'In einem Notruf erzählte Professor Shannon Lamb mit einer etwas zittrigen Stimme der Polizei, dass er seine Freundin erschossen habe und dass die Beamten zu seinem Haus kommen müssten.'
encoded_hi = tokenizer(src, return_tensors="pt", padding=True).to('cuda') # do_sample=True
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id['en_XX'], temperature=0.5, do_sample=True, num_beams=10, num_return_sequences=10)
tgt_txt = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
### Expected behavior
I think this should be solved but I'm not sure about the effect of the beam_scores. | 10-12-2022 04:58:47 | 10-12-2022 04:58:47 | Hi @ElliottYan 👋 Thank you for pointing it out, it seems like a bug indeed. I will look into it.<|||||>Great! Looking forward to your solution.
For now, I just swap these two lines (L2566 && 2567) and the error disappears. But I'm not sure what I do is correct. <|||||>Are you using half or full precision here? Also `inf` values are not necessarily the reason for a bug, it might also be that `mBart` has some default logit processor settings that 0 out values which the lead to `inf` (cc @gante) |
transformers | 19,508 | closed | Fix fairseq wav2vec2-xls-r pretrained weights conversion scripts | # What does this PR do?
Fixes: #19319
This PR fixes a bug in the wav2vec2 fairseq weights conversion script, where [wav2vec2-xls-r-kind](https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec/xlsr) weight files fail to be loaded at this line:
https://github.com/huggingface/transformers/blob/bc21aaca789f1a366c05e8b5e111632944886393/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py#L249
This can be resolved by specifying the fairseq task as `audio_pretraining` and loading the fairseq weights with that task context.
This change follows the approach of the fairseq library, which loads pretrained model weights by passing a `task` argument on the CLI.
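Roughly, the loading side of the fix looks like this (a sketch only - the exact fairseq arguments may differ between versions, and the paths are placeholders):
```python
from fairseq import checkpoint_utils

models, cfg, task = checkpoint_utils.load_model_ensemble_and_task(
    [checkpoint_path],  # placeholder: path to the downloaded xls-r .pt file
    arg_overrides={"data": dict_dir, "task": "audio_pretraining"},  # placeholder dictionary dir
)
model = models[0].eval()
```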
Conversion of other non-finetuned weights works without any side effects. (tested with [wav2vec2-base](https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small.pt), [wav2vec2-conformer](dl.fbaipublicfiles.com/fairseq/conformer/wav2vec2/librilight/LL_relpos_PT_no_FT))
Referenced model weights are in the following url: https://github.com/facebookresearch/fairseq/tree/main/examples/wav2vec
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sanchit-gandhi
@patrickvonplaten
| 10-12-2022 03:56:19 | 10-12-2022 03:56:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,507 | closed | Create dependencies file | ### Feature request
### Motivation
### Your contribution
| 10-12-2022 03:05:54 | 10-12-2022 03:05:54 | |
transformers | 19,506 | closed | update doc for perf_train_cpu_many, add oneccl_bindings_for_pytorch 1.12.100 | Signed-off-by: Wang, Yi A <[email protected]>
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-12-2022 01:58:25 | 10-12-2022 01:58:25 | @sgugger @yao-matrix @liangan1 please have a review<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,505 | closed | Special Language Token for PLBART needs to be updated | ### System Info
The `FAIRSEQ_LANGUAGE_CODES` in PLBartTokenizer [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/plbart/tokenization_plbart.py#L90) need to be as follows.
```
FAIRSEQ_LANGUAGE_CODES = {
"base": ["__java__", "__python__", "__en_XX__"],
"multi": ["__java__", "__python__", "__en_XX__", "__javascript__", "__php__", "__ruby__", "__go__"],
}
```
The current PLBartTokenizer treats `java` as a special token, and thus it removes the token when decoding is performed. An example is given below.
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
from transformers import PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# The code output is: "public void METHOD_1 ( TYPE_1 VAR_1 ) throws .lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
```
### Who can help?
@gunjan
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
from transformers import PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# public void METHOD_1 ( TYPE_1 VAR_1 ) throws .lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }
```
### Expected behavior
```
code = "public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }"
from transformers import PLBartTokenizer
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", language_codes="base")
model_inputs = tokenizer([code])
print(tokenizer.decode(model_inputs['input_ids'][0], skip_special_tokens=True, clean_up_tokenization_spaces=False))
# public void METHOD_1 ( TYPE_1 VAR_1 ) throws java.lang.Exception { super . METHOD_1 ( VAR_1 ) ; METHOD_2 ( VAR_1 ) ; }
``` | 10-12-2022 00:54:11 | 10-12-2022 00:54:11 | cc @gchhablani, could you take a look at this?<|||||>@LysandreJik I can have a look at this if it isn't being looked at.<|||||>Please go ahead, thank you @jordiclive!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,504 | closed | [Re-submit] Compute true loss Flax examples | Re-submit #18458.
cc @patrickvonplaten @sanchit-gandhi | 10-12-2022 00:22:38 | 10-12-2022 00:22:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,503 | closed | Create the arange tensor on device for enabling CUDA-Graph for Clip Encoder |
# What does this PR do?
This PR changes the allocation of a tensor in [modeling_clip.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/clip/modeling_clip.py#L665) to happen on the device side, so that CUDA-Graph can be used with DeepSpeed-Inference, which helps improve inference performance for the Stable Diffusion model. Here is the [PR](https://github.com/microsoft/DeepSpeed/pull/2381) that includes the optimization for improving the SD performance.
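The pattern, sketched (this is not the literal diff - names here are illustrative):
```python
import torch

def build_index_tensor(hidden_states: torch.Tensor) -> torch.Tensor:
    seq_length = hidden_states.shape[1]
    # allocate the arange directly on the same device as the activations,
    # so no host-side tensor (and implicit copy) breaks CUDA-Graph capture
    return torch.arange(seq_length, device=hidden_states.device).expand(hidden_states.shape[0], -1)
```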
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten, @stas00 | 10-11-2022 23:40:55 | 10-11-2022 23:40:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,502 | closed | Add multi-node conditions in trainer_qa.py and trainer_seq2seq.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
The QA example currently fails during evaluation when it is run on several nodes. This happens because secondary nodes are trying to write to `output_dir` while this directory only exists on the main node.
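A hypothetical sketch of the kind of guard this adds (a method meant to live on the example Trainer subclass; the exact condition used in the PR may differ):
```python
import os

def _maybe_create_output_dir(self, output_dir: str):
    # only the globally-main process creates the directory and writes predictions
    if self.is_world_process_zero():
        os.makedirs(output_dir, exist_ok=True)
```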
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-11-2022 22:36:24 | 10-11-2022 22:36:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,501 | closed | Remove roberta dependency from longformer fast tokenizer | # What does this PR do?
This PR removes the RoBERTA fast tokenizer dependency from the Longformer fast tokenizer, as tasked in #19303.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-11-2022 22:26:06 | 10-11-2022 22:26:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks again for your contribution! |
transformers | 19,500 | closed | Misalignment between documentation and implementation of mBART50 tokenisation for the decoder | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.14
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.8.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@patil-suraj @SaulLu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The bug has been reproduced in the outputs of [this](https://colab.research.google.com/drive/1XUHZNKdxMLnV3AV8eZtKj7LyGNVL63Dy?usp=sharing) colab notebook. The following are the steps to be followed:
1. Make a copy of the notebook.
2. Execute the first 2 cells.
3. In the source file for mbart(`/usr/local/bin/python3.7/dist-packages/transformers/models/mbart/modeling_mbart.py`), on line 1352(above `outputs = self.model(...`, after the `if labels is not None` block), add `print(f'Decoder Input Ids: {decoder_input_ids}\nLabels: {labels}')`.
4. Restart the runtime for the changes in the library to take place.
5. Run the third cell. The output is:
```
Decoder Input Ids: tensor([[ 2, 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]])
Labels: tensor([[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315,
42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]])
```
### Expected behavior
I was looking into fine-tuning `facebook/mbart-large-50` through [this](https://huggingface.co/docs/transformers/main/en/model_doc/mbart#training-of-mbart50) example in the documentation. As per the description, the expected input for the model is of the form `[lang_id] tokens [eos]` for both the encoder and the decoder.
While the `MBart50Tokenizer` produces outputs in the expected format, the `decoder_input_ids` get transformed to an incorrect one - `[eos] [lang_id] tokens`. Specifically, I believe the output should have been the following(do correct me if I am wrong here though):
```
Decoder Input Ids: tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362,
315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]])
Labels: tensor([[47711, 7844, 127666, 8, 18347, 18147, 1362, 315,
42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])
```
This is caused since the `shift_tokens_right` function does not seem to be adapted for mbart50. As per the docstring of this function,
> wrap the last non pad token (the [LID] token)
however, for an mbart50, the last non pad token would be an `eos`.
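A pure-Python illustration of what that wrap produces for the two label layouts (token strings are placeholders, not real ids):
```python
def wrap_last_token(seq):
    # "shift right and wrap the last token to the front"
    return seq[-1:] + seq[:-1]

mbart25_labels = ["tok_1", "tok_2", "</s>", "ro_RO"]   # MBartTokenizer: language id last
mbart50_labels = ["ro_RO", "tok_1", "tok_2", "</s>"]   # MBart50Tokenizer: eos last

print(wrap_last_token(mbart25_labels))  # ['ro_RO', 'tok_1', 'tok_2', '</s>'] -> starts with the language id
print(wrap_last_token(mbart50_labels))  # ['</s>', 'ro_RO', 'tok_1', 'tok_2'] -> starts with eos instead
```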
**Additional question:** Why should the `[eos]` token predict the `[lang_id]`? This happens in both mbart and mbart50. If not, should the last token in the labels be `-100`? If yes, there would be a subsequent issue, since the labels matrix from the tokenizer seems to be using `1` as the padding token instead of `-100`. Do let me know if I would be required to open the same!
If this bug seems legitimate, I would be glad to provide a fix for the same! I believe the `labels` key from MBart50Tokenizer would have to be updated to give the same output as the MBartTokenizer. | 10-11-2022 21:25:44 | 10-11-2022 21:25:44 | @ArthurZucker, when you have bandwidth, would you like to take a look at this?<|||||>Not stale, still looking forward to a response!<|||||>Hey! This is very similar to #18133.
First, I was not really able to reproduce the bug as the output of
```python
from transformers import MBart50TokenizerFast

tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
src_text = " UN Chief Says There Is No Military Solution in Syria"
tgt_text = "Şeful ONU declară că nu există o soluţie militară în Siria"
model_inputs = tokenizer(src_text, text_target=tgt_text, return_tensors="pt").input_ids
```
Gave :
```
tensor([[250004, 8274, 127873, 25916, 7, 8622, 2071, 438, 67485,
53, 187895, 23, 51712, 2]])
```
But in any case, I think that you are right, the documentation for both model is not aligned as the input is not shifted in the tokenizer but rather in the model. This was already mentioned so might as well adresse it !
<|||||>Hey! While the output of the tokenizer is correct(both input_ids and labels in the same format), the labels are going to pass through the `shift_tokens_right` to create the `decoder_input_ids`.
The `shift_tokens_right` expects the LID token at the end, however, `MBart50Tokenizer` will give an EOS token, therefore, the input to the decoder will end up being wrong.
Regarding the reproduction of the issue - do you mean reproducing the referenced issue? If yes, they are using the `MBartTokenizer`, while the code mentioned here uses the `MBart50Tokenizer`<|||||>I'll come back to this soon 😉 <|||||>It seems that this has been confusing a lot of people (including me, see #20610, ).
Let's work with your example:
- src_text : `en_XX UN Chief Says There Is No Military Solution in Syria</s>`
- labels : `ro_RO Şeful ONU declară că nu există o soluţie militară în Siria</s>`
- [shifted labels](https://github.com/huggingface/transformers/blob/main/src/transformers/models/mbart/modeling_mbart.py#L1348-L1349) : `</s>ro_RO Şeful ONU declară că nu există o soluţie militară în Siria` (= decoder_inputs_ids)
We are interested in supervised training where you feed the model with `inputs_ids` and `labels`. For most of the encoder decoder models, the labels are shifted to the right, so that the model will predict the next token in a MLM manner.
This means that if the `decoder_input_ids` are
```
[ 2, 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577]
```
Then the model (if it is a perfect model) predicts
```
[250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2]
```
Which is then compared to the loss.
That is also why when you generate (inference) you force the beginning token with `</s>`.
<|||||>Thanks for the clarification, @ArthurZucker! It still seems a bit wrong to expect the model to predict `ro_RO` given `</s>`. The comment `wrap the last non pad token (the <LID> token)` in code is also somewhat confusing! <|||||>I agree with @LoicGrobol.
I also want to clarify this example from the [docs](https://huggingface.co/docs/transformers/model_doc/mbart#training-of-mbart50):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
article_hi = "संयुक्त राष्ट्र के प्रमुख का कहना है कि सीरिया में कोई सैन्य समाधान नहीं है"
article_ar = "الأمين العام للأمم المتحدة يقول إنه لا يوجد حل عسكري في سوريا."
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
# translate Hindi to French
tokenizer.src_lang = "hi_IN"
encoded_hi = tokenizer(article_hi, return_tensors="pt")
generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "Le chef de l 'ONU affirme qu 'il n 'y a pas de solution militaire en Syria."
# translate Arabic to English
tokenizer.src_lang = "ar_AR"
encoded_ar = tokenizer(article_ar, return_tensors="pt")
generated_tokens = model.generate(**encoded_ar, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
# => "The Secretary-General of the United Nations says there is no military solution in Syria."
```
> That is also why when you generate (inference) you force the beginning token with ```</s>```.
Is this the same ```forced_bos_token``` mentioned in the code? If yes, then should we force it be the ```<\s>``` token, rather than ```ro_RO``` as done in the code?<|||||>Okay, indeed @LoicGrobol we should not compute the loss on all the forced decoder ids. This seems to apply to a few models, so will open a PR to fix these and add some documentation to properly explain all of this.
@devaansh100, no we have the `bos_token_id # </s>` and the `forced_decoder_ids #<LID>` which should ensures that we start with 2 tokens.
Thanks both of you for your feedback. <|||||>Note that currently this training seems to be needed to work well with `generate` in e.g. mBART, which uses `</s> LANGID` as its forced prompt (loss on the LANGID could indeed be skipped though). I guess that would have to changed too but I don't know how to make that change work with existing pretrained models.<|||||>Not sure I understand why we need to change the generate? We should not
<|||||>The only logic that need to be updated is computing the loss on the lang token. The rest of the training procedure etc is still correct! It's just that we don't want the model to learn a distribution/update it's distribution when predicting the second token, because it can be variable at training, but will always be fixed at inference <|||||>Got it! I guess the only change then would be in the "labels" from the tokenizer then - where there was LANGID initially, we would have a 1/-100?<|||||>Yep! That should be it 😉 <|||||>Thank you for all the help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,499 | closed | bart config changes | # What does this PR do?
PR for issue below:

Test passed:

| 10-11-2022 21:22:28 | 10-11-2022 21:22:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Not sure why it fails the check_code_quality test. Doing exactly what is written here: https://github.com/huggingface/transformers/pull/19485
<|||||>> Not sure why it fails the check_code_quality test. Doing exactly what is written here: #19485
Hi @imarekkus Could you try to run `make style` and see what happens?
Also, please do not change the `configuration_bert.py` in this PR, it is done in #19485, thank you 🙏 |
transformers | 19,498 | closed | Add a decorator for flaky tests | # What does this PR do?
This PR adds a new decorator to mark flaky tests, which will then automatically re-run them up to five times each time they fail (that param can be adapted). I've marked as flaky three tests I have seen recently fail for no reason as a demo. | 10-11-2022 21:01:24 | 10-11-2022 21:01:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,497 | closed | Update Whisper docs clarifying inference support for long-form decoding |
@ArthurZucker @patrickvonplaten
I have updated the [Whisper docs page](https://huggingface.co/docs/transformers/v4.23.1/en/model_doc/whisper#transformers.WhisperProcessor) to clarify that the current `decode()` implementation doesn't support long-form yet. Hope it'll save folks some time, vs digging in to find that out :)
| 10-11-2022 20:03:21 | 10-11-2022 20:03:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the addition 😉 |
transformers | 19,496 | closed | Avoid Push CI failing to report due to many commits being merged | # What does this PR do?
We have increasing commits merged recently, and when it happens in a very period of short time (4 merges in ~ 1min yesterday), we get an error when using `actions/checkout@v2` in the `workflow_run` event for **Push CI**
```bash
fatal: reference is not a tree: 5f5e264a12956bd7cce47dcb422b80ed68e4c24e
```
So this PR increases the fetch depth to 20, and hopefully we are safe with this number 😆
```
fetch-depth: 20
``` | 10-11-2022 19:53:09 | 10-11-2022 19:53:09 | @sgugger Probably it's good for L to know this change - your call :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,495 | closed | Bert config changes | # What does this PR do?
Fixes # (issue)
Fix done in accordance with the issue below:

Tests passed

| 10-11-2022 19:25:52 | 10-11-2022 19:25:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19495). All of your documentation changes will be reflected on that endpoint. |
transformers | 19,494 | closed | Fix grad loss computation | In the flax summarization fine-tuning example, the loss is only computed where the decoder_attention_mask is 1. Different batches on different devices will have different decoder_attention_masks, and `jax.lax.pmean(loss, axis_name="batch")` doesn't take this into account, so it won't be equivalent to if the loss was computed on all batches put together on a single device. To fix this, this PR computes the number of tokens for which the loss was computed on each device, and multiply the per-device losses and gradients by these weights, then `lax.psum` the losses, gradients and weights before then dividing by the psummed weights.
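A hedged sketch of that scheme (`compute_per_token_loss` and `state` are placeholders for what the example script already defines; the step is meant to run under `pmap` with `axis_name="batch"`):
```python
import jax

def loss_fn(params, batch):
    per_token_loss, label_mask = compute_per_token_loss(params, batch)  # placeholder helper
    # per-device sum of the loss and the number of label tokens it covers
    return (per_token_loss * label_mask).sum(), label_mask.sum()

def train_step(state, batch):
    (loss, num_labels), grads = jax.value_and_grad(loss_fn, has_aux=True)(state.params, batch)
    # sum (rather than average) across devices, then normalise by the global number of label tokens
    loss = jax.lax.psum(loss, axis_name="batch")
    num_labels = jax.lax.psum(num_labels, axis_name="batch")
    grads = jax.tree_util.tree_map(lambda g: g / num_labels, jax.lax.psum(grads, axis_name="batch"))
    return state.apply_gradients(grads=grads), loss / num_labels
```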
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj @sgugger | 10-11-2022 19:15:14 | 10-11-2022 19:15:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19494). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @andyehrenberg! This is a great spot! Indeed the loss isn't computed correctly. I've formalised an argument for this mathematically here (not rendering very well... easier viewed in markdown or LaTex):
<details>
<summary> Mathematical Proof </summary>
Technically speaking, in the `train_step`, the `pmap` won't compute a 'true' mean over devices. Here, what we're doing is computing a normalised loss on each device, and then averaging these losses over devices. This isn't strictly equal to summing the losses over all devices, and then dividing by the number of samples.
Let $K$ denote the number of devices. Denote the loss on the $i$-th device as $L_i$ (`loss.sum()`) and the number of samples $N_i$ (`label_mask.sum()`). In the `loss_fn`, we compute the normalised loss on each device (`loss.sum() / label_mask.sum()`):
$$\bar{L}_i = \frac{L_i}{N_i}$$
and then average over devices with the `pmap`:
$$\mathcal{L} = \frac{1}{K} \sum_{i=1}^{K} \frac{L_i}{N_i}$$
Whereas, for a 'true' loss, we should first add up all the losses over devices:
$$L_{tot} = \sum_{i=1}^{K} L_i $$
and then divide by the total number of labels:
$$\mathcal{L}' = \frac{L_{tot}}{N} = \frac{1}{N}\sum_{i=1}^{K} L_i $$
where $N$ is the total number of labels:
$$ N = \sum_{i=1}^{K} N_i $$
If we compare the two and ignore the constant $K$ in the `pmap` average:
$$\mathcal{L} = \sum_{i=1}^{K} \frac{L_i}{N_i}$$
$$ \mathcal{L}' = \frac{1}{N}\sum_{i=1}^{K} L_i $$
we see that the losses are in-fact different. The first expression is what you get if you average the losses on each device, then average these terms over devices with a `pmap`. The second expression is a 'true' loss, what you get by summing the losses on each device, summing these losses over devices, and then dividing by the total number of terms in your batch (= sum of the `label_mask` per device, summing these terms over devices).
</details>
A PR to address this was merged here: https://github.com/huggingface/transformers/pull/19504
Here, we compute the 'true' number of loss terms by summing the num labels on each device, and then normalising our loss by the sum of the labels over devices.
You should be able to rebase onto main to get these changes and compute a 'true' loss, both for summarisation and all the other Flax training examples 🤗
Closing this PR as https://github.com/huggingface/transformers/pull/19504 (merged) addresses this issue.
All the best with your Flax experiments! |
transformers | 19,493 | closed | Create ID3-Decision-Tree.py | ID3 Algorithm .Machine Learning.
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-11-2022 16:52:56 | 10-11-2022 16:52:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19493). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,492 | closed | `python3` instead of `python` in Push CI setup job | # What does this PR do?
The `setup` job in push CI uses image `transformers-all-latest-gpu-push-ci`, which should use `python3` instead of `python`.
(I forgot this detail when working on #19054)
Currently, the setup job failed, and no test to run. | 10-11-2022 16:32:07 | 10-11-2022 16:32:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Actually, `python` is not found. I think we need to tweak a bit if we want to use `python` as `python3`. Do you want me to work on the docker image for this purpose?
```bash
/__w/_temp/5f33851e-ac7e-4126-b3d7-88092ae0b56d.sh: 2: python: not found
```<|||||>No, don't worry. I was just complaining about the action env, not your PR :-) |
transformers | 19,491 | closed | Dev build of TensorFlow causing issue with pre-trained BERT | ### System Info
Python version: 3.7
TF branch: dev
(this is part of our nightly CI checks for MLflow to test dev builds; sorry for not executing `transformers-cli env` for this report)
installed packages:
absl-py-1.2.0
astunparse-1.6.3
cachetools-5.2.0
flatbuffers-22.9.24
gast-0.4.0
google-auth-2.12.0
google-auth-oauthlib-0.4.6
google-pasta-0.2.0
grpcio-1.49.1
h5py-3.7.0
keras-nightly-2.11.0.dev2022101007
libclang-14.0.6
markdown-3.4.1
opt-einsum-3.3.0
protobuf-3.19.6
pyasn1-0.4.8
pyasn1-modules-0.2.8
rsa-4.9 tb-nightly-2.11.0a20221010
tensorboard-data-server-0.6.1
tensorboard-plugin-wit-1.8.1
tensorflow-io-gcs-filesystem-0.27.0
termcolor-2.0.1
tf-estimator-nightly-2.11.0.dev2022101008
tf-nightly-2.11.0.dev20221010 wrapt-1.14.1
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
simply import `transformers.models.bert`
Issue line: https://www.google.com/url?q=https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/modeling_tf_utils.py%23L39&sa=D&source=docs&ust=1665453601638298&usg=AOvVaw1r381k-VA_PhIdIhALrmxc
The stack trace:
``` shell
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
module_name = 'modeling_tf_bert'
def _get_module(self, module_name: str):
try:
> return importlib.import_module("." + module_name, self.__name__)
module_name = 'modeling_tf_bert'
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1031:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = '.modeling_tf_bert', package = 'transformers.models.bert'
def import_module(name, package=None):
"""Import a module.
The 'package' argument is required when performing a relative import. It
specifies the package to use as the anchor point from which to resolve the
relative import to an absolute import.
"""
level = 0
if name.startswith('.'):
if not package:
msg = ("the 'package' argument is required to perform a relative "
"import for {!r}")
raise TypeError(msg.format(name))
for character in name:
if character != '.':
break
level += 1
> return _bootstrap._gcd_import(name[level:], package, level)
character = 'm'
level = 1
name = '.modeling_tf_bert'
package = 'transformers.models.bert'
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/importlib/__init__.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
package = 'transformers.models.bert', level = 1
> ???
level = 1
name = 'transformers.models.bert.modeling_tf_bert'
package = 'transformers.models.bert'
<frozen importlib._bootstrap>:1006:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
import_ = <function _gcd_import at 0x7f8299e66b00>
> ???
import_ = <function _gcd_import at 0x7f8299e66b00>
module = <object object at 0x7f8299e4e060>
name = 'transformers.models.bert.modeling_tf_bert'
<frozen importlib._bootstrap>:983:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.models.bert.modeling_tf_bert'
import_ = <function _gcd_import at 0x7f8299e66b00>
> ???
import_ = <function _gcd_import at 0x7f8299e66b00>
name = 'transformers.models.bert.modeling_tf_bert'
parent = 'transformers.models.bert'
parent_module = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
path = ['/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert']
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
<frozen importlib._bootstrap>:967:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
> ???
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
spec = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
<frozen importlib._bootstrap>:677:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f8[208](https://github.com/mlflow/mlflow/actions/runs/3219669785/jobs/5266077564#step:12:209)0ff6d0>
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
> ???
code = <code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py", line 16>
module = <module 'transformers.models.bert.modeling_tf_bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'>
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080ff6d0>
<frozen importlib._bootstrap_external>:728:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
f = <built-in function exec>
args = (<code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tra...ngAndCrossAttentions': <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>, ...})
kwds = {}
> ???
args = (<code object <module> at 0x7f82080b9ae0, file "/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tra...ngAndCrossAttentions': <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>, ...})
f = <built-in function exec>
kwds = {}
<frozen importlib._bootstrap>:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
""" TF 2.0 BERT model."""
import math
import warnings
from dataclasses import dataclass
from typing import Dict, Optional, Tuple, Union
import numpy as np
import tensorflow as tf
from ...activations_tf import get_tf_activation
from ...modeling_tf_outputs import (
TFBaseModelOutputWithPastAndCrossAttentions,
TFBaseModelOutputWithPoolingAndCrossAttentions,
TFCausalLMOutputWithCrossAttentions,
TFMaskedLMOutput,
TFMultipleChoiceModelOutput,
TFNextSentencePredictorOutput,
TFQuestionAnsweringModelOutput,
TFSequenceClassifierOutput,
TFTokenClassifierOutput,
)
> from ...modeling_tf_utils import (
TFCausalLanguageModelingLoss,
TFMaskedLanguageModelingLoss,
TFModelInputType,
TFMultipleChoiceLoss,
TFNextSentencePredictionLoss,
TFPreTrainedModel,
TFQuestionAnsweringLoss,
TFSequenceClassificationLoss,
TFTokenClassificationLoss,
get_initializer,
keras_serializable,
unpack_inputs,
)
Dict = typing.Dict
Optional = typing.Optional
TFBaseModelOutputWithPastAndCrossAttentions = <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions'>
TFBaseModelOutputWithPoolingAndCrossAttentions = <class 'transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions'>
TFCausalLMOutputWithCrossAttentions = <class 'transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions'>
TFMaskedLMOutput = <class 'transformers.modeling_tf_outputs.TFMaskedLMOutput'>
TFMultipleChoiceModelOutput = <class 'transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput'>
TFNextSentencePredictorOutput = <class 'transformers.modeling_tf_outputs.TFNextSentencePredictorOutput'>
TFQuestionAnsweringModelOutput = <class 'transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput'>
TFSequenceClassifierOutput = <class 'transformers.modeling_tf_outputs.TFSequenceClassifierOutput'>
TFTokenClassifierOutput = <class 'transformers.modeling_tf_outputs.TFTokenClassifierOutput'>
Tuple = typing.Tuple
Union = typing.Union
__builtins__ = <builtins>
__cached__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__pycache__/modeling_tf_bert.cpython-37.pyc'
__doc__ = ' TF 2.0 BERT model.'
__file__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py'
__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080ff6d0>
__name__ = 'transformers.models.bert.modeling_tf_bert'
__package__ = 'transformers.models.bert'
__spec__ = ModuleSpec(name='transformers.models.bert.modeling_tf_bert', loader=<_frozen_importlib_external.SourceFileLoader objec...igin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py')
dataclass = <function dataclass at 0x7f8289b9cdd0>
get_tf_activation = <function get_tf_activation at 0x7f82080be680>
math = <module 'math' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/lib-dynload/math.cpython-37m-x86_64-linux-gnu.so'>
np = <module 'numpy' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/numpy/__init__.py'>
tf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/__init__.py'>
warnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/warnings.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/modeling_tf_bert.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
"""TF general model utils."""
import functools
import gc
import inspect
import json
import os
import pickle
import re
import warnings
from collections.abc import Mapping
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union
import h5py
import numpy as np
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.engine import data_adapter
from tensorflow.python.keras.engine.keras_tensor import KerasTensor
from tensorflow.python.keras.saving import hdf5_format
from huggingface_hub import Repository, list_repo_files
> from keras.saving.hdf5_format import save_attributes_to_hdf5_group
E ModuleNotFoundError: No module named 'keras.saving.hdf5_format'
Any = typing.Any
Callable = typing.Callable
Dict = typing.Dict
K = <module 'tensorflow.python.keras.backend' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/backend.py'>
KerasTensor = <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>
List = typing.List
Mapping = <class 'collections.abc.Mapping'>
Optional = typing.Optional
Path = <class 'pathlib.Path'>
Repository = <class 'huggingface_hub.repository.Repository'>
TYPE_CHECKING = False
Union = typing.Union
__builtins__ = <builtins>
__cached__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__pycache__/modeling_tf_utils.cpython-37.pyc'
__doc__ = 'TF general model utils.'
__file__ = '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py'
__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f82080b6890>
__name__ = 'transformers.modeling_tf_utils'
__package__ = 'transformers'
__spec__ = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f82080b6890>, origin='/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py')
data_adapter = <module 'tensorflow.python.keras.engine.data_adapter' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/engine/data_adapter.py'>
functools = <module 'functools' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/functools.py'>
gc = <module 'gc' (built-in)>
h5py = <module 'h5py' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/h5py/__init__.py'>
hdf5_format = <module 'tensorflow.python.keras.saving.hdf5_format' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py'>
inspect = <module 'inspect' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/inspect.py'>
json = <module 'json' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/json/__init__.py'>
list_repo_files = <bound method HfApi.list_repo_files of <huggingface_hub.hf_api.HfApi object at 0x7f8231bb3310>>
np = <module 'numpy' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/numpy/__init__.py'>
os = <module 'os' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/os.py'>
pickle = <module 'pickle' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/pickle.py'>
re = <module 're' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/re.py'>
tf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/tensorflow/__init__.py'>
warnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/warnings.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/modeling_tf_utils.py:39: ModuleNotFoundError
The above exception was the direct cause of the following exception:
@pytest.mark.skipif(
not (_is_importable("transformers") and keras_version >= Version("2.6.0")),
reason="This test requires transformers, which is no longer compatible with Keras < 2.6.0",
)
def test_pyfunc_serve_and_score_transformers():
> from transformers import BertConfig, TFBertModel # pylint: disable=import-error
tests/keras/test_keras_model_export.py:662:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
<frozen importlib._bootstrap>:1032: in _handle_fromlist
???
fromlist = ('BertConfig', 'TFBertModel')
import_ = <built-in function __import__>
module = <module 'transformers' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__init__.py'>
recursive = False
x = 'TFBertModel'
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1022: in __getattr__
value = getattr(module, name)
module = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
name = 'TFBertModel'
self = <module 'transformers' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/__init__.py'>
/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/utils/import_utils.py:1021: in __getattr__
module = self._get_module(self._class_to_module[name])
name = 'TFBertModel'
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <module 'transformers.models.bert' from '/opt/hostedtoolcache/Python/3.7.12/x64/lib/python3.7/site-packages/transformers/models/bert/__init__.py'>
module_name = 'modeling_tf_bert'
def _get_module(self, module_name: str):
try:
return importlib.import_module("." + module_name, self.__name__)
except Exception as e:
raise RuntimeError(
f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its"
f" traceback):\n{e}"
> ) from e
E RuntimeError: Failed to import transformers.models.bert.modeling_tf_bert because of the following error (look up to see its traceback):
E No module named 'keras.saving.hdf5_format'
```
### Expected behavior
Changes made to Keras namespace (the addition of a `legacy` mode for serialization / deserialization) in this commit: https://github.com/keras-team/keras/commit/c06aa015e900a2029b5b379f374e5d4dc615fcbf will likely require an update for pre-trained huggingface models.
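For illustration only, a defensive import along these lines would avoid the hard crash (this is not the actual fix used in `transformers`, and whether the TensorFlow-vendored copy keeps exposing the same helper in future releases is an assumption):
```python
try:
    # Works with Keras < 2.11, where the module still exists at this path.
    from keras.saving.hdf5_format import save_attributes_to_hdf5_group
except ImportError:
    # Fall back to the copy vendored inside TensorFlow, whose path appears
    # in the traceback below.
    from tensorflow.python.keras.saving.hdf5_format import save_attributes_to_hdf5_group
```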
We wanted to make you aware of this if you hadn't already known about it. | 10-11-2022 15:18:09 | 10-11-2022 15:18:09 | Maybe of interest to @gante @Rocketknight1
<|||||>Hmm, this looks like they changed something about saving in H5 format. The bug report is appreciated, but since the API might be unstable, I think we probably won't change anything in `transformers` yet. However, if the bug still occurs in TF 2.11-rc0 then we definitely have a problem and will try to fix things before 2.11 final. Thank you!<|||||>> Hmm, this looks like they changed something about saving in H5 format. The bug report is appreciated, but since the API might be unstable, I think we probably won't change anything in `transformers` yet. However, if the bug still occurs in TF 2.11-rc0 then we definitely have a problem and will try to fix things before 2.11 final. Thank you!
Sounds great! I just wanted to give you a heads up and save you some debugging time for when the rc branch is cut :) <|||||>cc @gante and @ydshieh - this might be nothing, but we should remember to do some testing once the RC arrives.<|||||>Interesting 🤔 BTW, the problematic import (`save_attributes_to_hdf5_group`) is only used to save shards<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I see some TF release candidates out - @BenWilson2 have you tried them and encountered the same issue?<|||||>@Rocketknight1 they've been consistently failing with our CI testing (we pull nightlies and main branches).
Update on this:
TF 2.11 released today with these breaking changes.
Here's the stack trace we're getting on this release:
```
self = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>
module_name = 'modeling_tf_utils'
def _get_module(self, module_name: str):
try:
> return importlib.import_module("." + module_name, self.__name__)
module_name = 'modeling_tf_utils'
self = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/utils/import_utils.py:1076:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = '.modeling_tf_utils', package = 'transformers'
def import_module(name, package=None):
"""Import a module.
The 'package' argument is required when performing a relative import. It
specifies the package to use as the anchor point from which to resolve the
relative import to an absolute import.
"""
level = 0
if name.startswith('.'):
if not package:
msg = ("the 'package' argument is required to perform a relative "
"import for {!r}")
raise TypeError(msg.format(name))
for character in name:
if character != '.':
break
level += 1
> return _bootstrap._gcd_import(name[level:], package, level)
character = 'm'
level = 1
name = '.modeling_tf_utils'
package = 'transformers'
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/importlib/__init__.py:127:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.modeling_tf_utils', package = 'transformers', level = 1
> ???
level = 1
name = 'transformers.modeling_tf_utils'
package = 'transformers'
<frozen importlib._bootstrap>:1014:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.modeling_tf_utils'
import_ = <function _gcd_import at 0x7f8c4fb15430>
> ???
import_ = <function _gcd_import at 0x7f8c4fb15430>
module = <object object at 0x7f8c4faec060>
name = 'transformers.modeling_tf_utils'
<frozen importlib._bootstrap>:[991](https://github.com/mlflow/mlflow/actions/runs/3500477115/jobs/5863188221#step:9:992):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
name = 'transformers.modeling_tf_utils'
import_ = <function _gcd_import at 0x7f8c4fb15430>
> ???
import_ = <function _gcd_import at 0x7f8c4fb15430>
name = 'transformers.modeling_tf_utils'
parent = 'transformers'
parent_module = <module 'transformers' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__init__.py'>
path = ['/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers']
spec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')
<frozen importlib._bootstrap>:975:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
spec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')
> ???
module = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>
spec = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')
<frozen importlib._bootstrap>:671:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>
module = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>
> ???
code = <code object <module> at 0x7f8b21782030, file "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 16>
module = <module 'transformers.modeling_tf_utils' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'>
self = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>
<frozen importlib._bootstrap_external>:843:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
f = <built-in function exec>
args = (<code object <module> at 0x7f8b21782030, file "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tra...d' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>, ...})
kwds = {}
> ???
args = (<code object <module> at 0x7f8b21782030, file "/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tra...d' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>, ...})
f = <built-in function exec>
kwds = {}
<frozen importlib._bootstrap>:219:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
"""TF general model utils."""
import functools
import gc
import inspect
import json
import os
import pickle
import re
import warnings
from collections.abc import Mapping
from pathlib import Path
from typing import TYPE_CHECKING, Any, Callable, Dict, List, Optional, Union
import h5py
import numpy as np
import tensorflow as tf
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.engine import data_adapter
from tensorflow.python.keras.engine.keras_tensor import KerasTensor
from tensorflow.python.keras.saving import hdf5_format
from huggingface_hub import Repository, list_repo_files
> from keras.saving.hdf5_format import save_attributes_to_hdf5_group
E ModuleNotFoundError: No module named 'keras.saving.hdf5_format'
Any = typing.Any
Callable = typing.Callable
Dict = typing.Dict
K = <module 'tensorflow.python.keras.backend' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/backend.py'>
KerasTensor = <class 'tensorflow.python.keras.engine.keras_tensor.KerasTensor'>
List = typing.List
Mapping = <class 'collections.abc.Mapping'>
Optional = typing.Optional
Path = <class 'pathlib.Path'>
Repository = <class 'huggingface_hub.repository.Repository'>
TYPE_CHECKING = False
Union = typing.Union
__builtins__ = <builtins>
__cached__ = '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/__pycache__/modeling_tf_utils.cpython-38.pyc'
__doc__ = 'TF general model utils.'
__file__ = '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py'
__loader__ = <_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>
__name__ = 'transformers.modeling_tf_utils'
__package__ = 'transformers'
__spec__ = ModuleSpec(name='transformers.modeling_tf_utils', loader=<_frozen_importlib_external.SourceFileLoader object at 0x7f8b20c33d90>, origin='/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py')
data_adapter = <module 'tensorflow.python.keras.engine.data_adapter' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/engine/data_adapter.py'>
functools = <module 'functools' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/functools.py'>
gc = <module 'gc' (built-in)>
h5py = <module 'h5py' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/h5py/__init__.py'>
hdf5_format = <module 'tensorflow.python.keras.saving.hdf5_format' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/python/keras/saving/hdf5_format.py'>
inspect = <module 'inspect' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/inspect.py'>
json = <module 'json' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/json/__init__.py'>
list_repo_files = <bound method HfApi.list_repo_files of <huggingface_hub.hf_api.HfApi object at 0x7f8b30547730>>
np = <module 'numpy' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/numpy/__init__.py'>
os = <module 'os' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/os.py'>
pickle = <module 'six.moves.cPickle' (<six._SixMetaPathImporter object at 0x7f8c4d2066d0>)>
re = <module 're' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/re.py'>
tf = <module 'tensorflow' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/tensorflow/__init__.py'>
warnings = <module 'warnings' from '/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/warnings.py'>
/opt/hostedtoolcache/Python/3.8.12/x64/lib/python3.8/site-packages/transformers/modeling_tf_utils.py:39: ModuleNotFoundError
```<|||||>Looks like you're all set already to support and just need an import version check now that these changes are in the official release :)
Although it looks like the team has some additional serious breaking changes on the horizon: https://github.com/tensorflow/tensorflow/releases/tag/v2.11.0 <|||||>Hi @BenWilson2 👋
We have made recent changes related to the imports from Keras (https://github.com/huggingface/transformers/pull/20317) -- it should also solve this issue, correct? <|||||>Those changes look great and should completely fix the breaking issues that 2.11 introduced. Thank you for the very fast response to address this!
Is there a scheduled release for 4.24.1 coming up soon?<|||||>No, we won't make a patch release for this: it's not a regression from us, but breaking changes from TensorFlow. The next release of Transformers will be next week, probably on December 1st :-)<|||||>Awesome! (I wouldn't have expected a patch release for this; I was just curious, based on the history of releases this year, if you had another micro release queued anyway). Thank you for the timeline for the next minor release. We'll be sure to unblock users an unpin the version right after your next minor release.
Thanks again! :) |
transformers | 19,490 | closed | ASR pipeline does not work with openai/whisper on current master | ### System Info
transformers @ git+https://github.com/huggingface/transformers.git@b651efe59ea506d38173e3a60a4228e7e74719f9
python 3.6
Standard AWS Ubuntu Deep Learning AMI (Ubuntu 18.04) Version 30.0
### Who can help?
@Narsil @anton-l @sanchit-gandhi @patrickvonplaten
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
To reproduce, run the following code, taken from the ASR pipeline example and Whisper:
```python
from datasets import load_dataset
from transformers import pipeline
pipe = pipeline(model="openai/whisper-large")
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
output = pipe(ds[0]['file'], chunk_length_s=30, stride_length_s=(4, 2))
```
yields:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-13-efceed64cd5c> in <module>
----> 1 output = pipe(ds[0]['file'], chunk_length_s=30, stride_length_s=(4, 2))
~/venv38/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in __call__(self, inputs, **kwargs)
181 `"".join(chunk["text"] for chunk in output["chunks"])`.
182 """
--> 183 return super().__call__(inputs, **kwargs)
184
185 def _sanitize_parameters(self, **kwargs):
~/venv38/lib/python3.8/site-packages/transformers/pipelines/base.py in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1072 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1073 else:
-> 1074 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
1075
1076 def run_multi(self, inputs, preprocess_params, forward_params, postprocess_params):
~/venv38/lib/python3.8/site-packages/transformers/pipelines/base.py in run_single(self, inputs, preprocess_params, forward_params, postprocess_params)
1093 def run_single(self, inputs, preprocess_params, forward_params, postprocess_params):
1094 all_outputs = []
-> 1095 for model_inputs in self.preprocess(inputs, **preprocess_params):
1096 model_outputs = self.forward(model_inputs, **forward_params)
1097 all_outputs.append(model_outputs)
~/venv38/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in preprocess(self, inputs, chunk_length_s, stride_length_s)
260 # Currently chunking is not possible at this level for `seq2seq` so
261 # it's ok.
--> 262 align_to = self.model.config.inputs_to_logits_ratio
263 chunk_len = int(round(chunk_length_s * self.feature_extractor.sampling_rate / align_to) * align_to)
264 stride_left = int(round(stride_length_s[0] * self.feature_extractor.sampling_rate / align_to) * align_to)
~/venv38/lib/python3.8/site-packages/transformers/configuration_utils.py in __getattribute__(self, key)
252 if key != "attribute_map" and key in super().__getattribute__("attribute_map"):
253 key = super().__getattribute__("attribute_map")[key]
--> 254 return super().__getattribute__(key)
255
256 def __init__(self, **kwargs):
AttributeError: 'WhisperConfig' object has no attribute 'inputs_to_logits_ratio'
```
### Expected behavior
I would've expected to obtain the transcript in `output`. | 10-11-2022 13:20:52 | 10-11-2022 13:20:52 | cc @ArthurZucker <|||||>I will have a look! But chunking is not supported yet with whisper. (Should take care of it next week)
Normally a warning should pop instead of an error<|||||>@ArthurZucker are we sure `Whisper` can handle chunking ?
> Whisper is not a CTC model meaning that chunking as shown in Nico's [blog](https://huggingface.co/blog/asr-chunking) does not work.
from internal conversation.
Happy to jump into a design call to discuss whether we can do it or not.
Not being CTC means it's harder to handle the boundaries. Boundaries at silence are sort of OK, but unfortunately can never really be a *complete* solution (because you can never be sure you're going to get a silence, and you MUST be able to handle chunking regardless).
This might be deemed acceptable in Whisper btw, but when we checked for regular models, the regular silence detection was not good enough to be run automatically (meaning you always have to tune settings to get decent silence results with most silence detectors)<|||||>Really sorry about my miscommunication. The chunking that will be supported is different from CTC. Let's organize a call to speak in more detail about that 😉
The goal would be to be able to specify a chunk length and stride length (if people want to customize it) but default Whisper has its own parameters. Let's talk more about that when we call 🤗<|||||>Would also be interested to hear more about how chunking will be supported for the Whisper ASR pipeline! 😁
Related to this: is there a way to avoid the transcription being cut off too early with a HF ASR pipeline? It seems the ASR pipeline will only transcribe the 1st section if we have a longer audio file with silence in between. <|||||>Hi @CarloLepelaars
It's actually quite challenging to do chunking with whisper. The reason is that the suggested way by OpenAI needs to run the inference on the first 30s before being able to run inference on the next 30s starting at 30s - X. X depends on the output of the first run.
This violates an important property of a pipeline, which is that the generations shouldn't depend on each other (in order to enable batching).
That doesn't mean it's impossible, but the stiching back of actual predictions becomes hairy:
- We have no control where the timestamps are created, and we can't force them to appear within the strides.
- It also require an extremely custom `LogitsProcessor` to "force" timestamp tokens to appear.
For the audio being cut off, would you rather have an error being thrown ? Maybe @ArthurZucker has better ideas what we should do when the audio is too long.<|||||>Thanks @Narsil !
For long audio, we can just enable the chunking without `timestamp` prediction. Though the results won't be extremely good, I remember attempting this (ultra naive way, with no `stride` ) and it gave pretty decent outputs :
```python
"""
Je mappelle Claude. Je decoupe plouf. Let's just try it again. Je mappelle Claude. Je te plie mlu. Huh. It's not quite what I'm saying. Really? Sounds exactly the same to me. It does? Really? Yeah. All right, let's just try it again. Really listen.
Okay. Je mappelle Claude. Je te flou... flie. Oh, mon Dieu. Oh, de fouf. Je mappelle Claude. Je te call blue. No! Okay, maybe if we just break it down. Okay, let's just... let's try it one syllable at a time. Okay, so repeat after me. Je... Je... Ma... Ma... Pelle.
Great! Okay, faster. Je m'mappelle. Je m'mappelle. Me poo poo! It's too hard. I can't teach you. What are you doing? I have to go before I put your head through a wall. Don't go! Don't go! I need you! My audition is tomorrow! Jableau Blanc! Mille lapis! Au Blanc! Pou!
"""
```
For this clip: https://www.youtube.com/watch?v=H3dToD7_ATU, which, apart from capitalization, seems extremely good!
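A rough sketch of what such a naive approach could look like (purely illustrative: no stride, no timestamp handling; it assumes `pipe` is the ASR pipeline from the issue and `audio` is a 1-D array already at the pipeline's sampling rate):
```python
def naive_transcribe(pipe, audio, sampling_rate=16000, chunk_length_s=30):
    # Split the waveform into fixed 30s windows, transcribe them
    # independently, then join the texts. Words cut at a boundary will
    # likely be garbled, which is why a real solution needs strides.
    chunk_len = chunk_length_s * sampling_rate
    texts = []
    for start in range(0, len(audio), chunk_len):
        texts.append(pipe(audio[start : start + chunk_len])["text"])
    return " ".join(texts)
```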
<|||||>@ArthurZucker What about erroring out when the input audio is too long ?<|||||>I'd rather add a warning saying that the audio will be automatically cropped! WDYT? <|||||>IMO error is better here.
@Narsil if there is a chance to run Whisper's silence detection + chunking mechanism in a pipeline I think this would be very useful/impactful <|||||>> I will have a look! But chunking is not supported yet with whisper. (Should take care of it next week)
> Normally a warning should pop instead of an error
Has this been implemented? Where can I check the upgrades for when it is functional?
(I understand it is not an easy task, just wanted to make sure that I have the tools to find out about the implementation when it is available).<|||||>Hey, here is one of the PR: #20104<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Resolved in https://github.com/huggingface/transformers/pull/20104 |
transformers | 19,489 | closed | Try replacing tf.int32 with tf.int64 across all tests | This is a draft PR where I just search-replaced `tf.int32` with `tf.int64` in our tests to check what breaks. I'll probably also need to cast our dummy inputs correctly to make this work, at least! | 10-11-2022 13:10:35 | 10-11-2022 13:10:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19489). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Bumping this to keep it open - a great int dtype purge is still on my list, but I got sidetracked with some other high-priority stuff!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this because I think (I hope) that our dtype issues are mostly resolved by now |
transformers | 19,488 | closed | PreTrainedTokenizerBase issue produced by PR #19073 | ### System Info
- `transformers` version: 4.23.1
- Platform: Linux-5.4.0-125-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.13
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@saull
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Have a local `tokenizer.json` file (different from the Hub's file and in the same folder as the invoked code) and invoke the following code:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
print(tokenizer)
```
### Error
It depends on what the local `tokenizer.json` contains. It could be an error stating that the two tokenizers differ in the number of tokens, etc. Nevertheless, here is a trace of what could be the issue:
#### transformers.tokenization_utils_base (Line 1726)
```
for file_id, file_path in vocab_files.items():
if file_path is None:
resolved_vocab_files[file_id] = None
elif os.path.isfile(file_path):
resolved_vocab_files[file_id] = file_path
...
```
If we print the `vocab_files` dictionary, most of the time its output will be as expected:
```
{'vocab_file': 'vocab.json', 'merges_file': 'merges.txt', 'tokenizer_file': 'tokenizer.json', 'added_tokens_file': 'added_tokens.json', 'special_tokens_map_file': 'special_tokens_map.json', 'tokenizer_config_file': 'tokenizer_config.json'}
```
With the added lines in PR #19073, the code will now check whether `tokenizer.json` is a file that exists on the system, and if it does, it will mark it as the `file_path` for the `resolved_vocab_files` dictionary. Unfortunately, this is not what we expect, because we need the `file_path` to come from the Hub's download (since we are loading a pre-trained tokenizer from an identifier found on the Hub) and not from a local file.
If we print the `resolved_vocab_files` dictionary with the added lines from PR #19073, this is its output:
```
{... 'tokenizer_file': 'tokenizer.json' ...}
```
Without the added lines:
```
{... 'tokenizer_file': '/home/gderosa/.cache/huggingface/hub/models--Salesforce--codegen-350M-mono/snapshots/40b7a3b6e99e73bdb497a14b740e7167b3413c74/tokenizer.json' ...}
```
My assumption is that this very same behavior should occur if users have any of the local files defined by the `vocab_files` dictionary in the same folder from which they run their scripts.
### Solutions
Maybe the `cached_file` loading should happen before the added lines? And if the cached version cannot be found, it falls back to local files? (A rough sketch of this ordering is given below.)
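Something along these lines, purely as a hypothetical sketch (the helper names mirror the excerpt above, but the exact `cached_file` signature and error handling in `transformers` may differ):
```python
for file_id, file_path in vocab_files.items():
    if file_path is None:
        resolved_vocab_files[file_id] = None
        continue
    try:
        # Prefer the Hub/cache resolution when loading from a model identifier.
        resolved_vocab_files[file_id] = cached_file(
            pretrained_model_name_or_path, file_path, cache_dir=cache_dir
        )
    except EnvironmentError:
        # Only now consider a file with that name sitting next to the script.
        resolved_vocab_files[file_id] = file_path if os.path.isfile(file_path) else None
```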
### Expected behavior
Expected behavior is to use the `tokenizer_file` from the `pretrained_model_name_or_path` instead of the local file. | 10-11-2022 12:33:50 | 10-11-2022 12:33:50 | Hi everyone! Hope everything is going well with you.
Please let me know if I could be clear enough in describing the issue.
Thanks for your attention and best regards,
Gustavo.<|||||>Thank you for the issue @gugarosa!
Pinging @sgugger <|||||>Thanks for the report. I understand the bug and your analysis seems correct for its cause. Will work on a fix as soon I have some free time (might be early next week only)!<|||||>Got time today actually, this should be fixed by the PR linked above!<|||||>Thanks so much for the prompt response @sgugger! |
transformers | 19,487 | open | 🔥[Community Event] Doc Tests Sprint - Configuration files🔥 | This sprint is similar to #16292 - but for model **configuration files**, i.e. `configuration_[model_name].py`.
For example, `src/transformers/models/bert/configuration_bert.py`
# The expected changes
The changes we expect could be find #19485:
1. **Change the import order of the model and configuration classes**
2. **Add `(with random weights)` in the comment before model initialization line**
3. **Add `configuration_[model_name].py` to `utils/documentation_tests.txt`** (respecting the order)
Please do step 3. only after **running the doctests and making sure all tests pass** (see below) 🙏
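For illustration, after applying changes 1. and 2. the example section of a configuration docstring should look roughly like this (shown here for BERT; adapt the class and checkpoint names to your model):
```python
>>> from transformers import BertConfig, BertModel

>>> # Initializing a BERT bert-base-uncased style configuration
>>> configuration = BertConfig()

>>> # Initializing a model (with random weights) from the bert-base-uncased style configuration
>>> model = BertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```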
# How to run doctests
Suppose you are working on `src/transformers/models/bert/configuration_bert.py`. The steps to run the test are:
0. **Stage your changes**
```bash
git add src/transformers/models/bert/configuration_bert.py
```
1. **Prepare the files to be tested**
```python
python utils/prepare_for_doc_test.py src
```
or if you prefer to be more specific
```python
python utils/prepare_for_doc_test.py src/transformers/models/bert/configuration_bert.py
```
This will change some files (doc-testing needs to add additional lines that we don't include in the doc source files).
2. **Launch the test:**
```python
python -m pytest --doctest-modules src/transformers/models/bert/configuration_bert.py -sv --doctest-continue-on-failure
```
3. **Cleanup git status**
```bash
git checkout -- .
```
to clean up the changes in step 1.
# Ready (or not)?
If all tests pass, you can commit, push and open a PR 🔥 🚀; otherwise, iterate the above steps 💯!
| 10-11-2022 10:03:33 | 10-11-2022 10:03:33 | I'd like to work on this; I'll start with YOLOS and open a PR :)<|||||>I'll take on Whisper!<|||||>I'll work on Beit!<|||||>FYI Bart #19524 and Albert #19541 are already done :) <|||||>I'll take on GPT2 next!<|||||>I can take imageGPT <|||||>i'll work on yoso
<|||||>I'll work on
- RoBERTa
- ViT
- DeiT
- Reformer
and open up PR soon :)<|||||>Also will raise for Transformer-XL !
<|||||>I'll work on bloom<|||||>I'll work on ctrl<|||||>Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,<|||||>> Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,
Hi @SD-13 You can check the latest file
https://github.com/IzicTemi/transformers/blob/main/utils/documentation_tests.txt
Any configuration file that is not on that list (on `main` branch) **and [not claimed here yet, no PR opened yet]** are all welcome :-)
<|||||>Hey @ydshieh, could I fix multiple models in a single PR, or do I have to open a single PR for each fix?<|||||>I will be taking blenderbots as well
<|||||>> Hey @ydshieh, could I fix multiple models in a single PR, or do I have to open a single PR for each fix?
2 or 3 might be good. But don't take more in a single PR - the sprint is for everyone to contribute, so leave some to others :-)<|||||>> > Hi @ydshieh, I am new here and want to contribute to this issue. Can you please help me to find the remaining files? Thanks,
>
> Hi @SD-13 You can check the latest file https://github.com/IzicTemi/transformers/blob/main/utils/documentation_tests.txt
>
> Any configuration file that is not on that list (on `main` branch) **and [not claimed here yet, no PR opened yet]** are all welcome :-)
Thanks @ydshieh , that was helpful. I can take `vision_text_dual_encoder.py`<|||||>Hey @ydshieh, I am getting this error

can you please help me to understand the reason and fix it? Thanks,<|||||>@SD-13 Could you open a PR (even if it is not complete yet), and post the command you run and the complete error message in that PR? Using image is not very convivence to search and debug 🙏 <|||||>> @SD-13 Could you open a PR (even if it is not complete yet), and post the command you run and the complete error message in that PR? Using image is not very convivence to search and debug pray
Yeah true. I created [this](https://github.com/huggingface/transformers/pull/19580) PR and let's leave those errors since all checks are passing. Thanks, <|||||>I will also take `time_series_transformer.py`, `vision_encoder_decoder.py`, and `trajectory_transformer.py`. Thanks,<|||||>took up blenderbot_small too.
<|||||>I'll take on
- SEW
- SEW-D
- Swin
- Swin V2
- UniSpeech
and will open up PR :)<|||||>Hi @ydshieh , I want to contribute to this issue. Can you please help me to find the remaining files?<|||||>> Hi @ydshieh , I want to contribute to this issue. Can you please help me to find the remaining files?
Hey @SaurabhBudhwani26 , please check [this](https://github.com/huggingface/transformers/issues/19487#issuecomment-1277518647). I hope it will be helpful. Thanks, <|||||>I'll work on `configuration_visual_bert.py`<|||||>I will work on:
- `big_bird`
- `bigbird_pegasus`<|||||>I will work on
- `configuration_xlm_roberta.py`
- `configuration_xlm_roberta_xl.py`<|||||>Can I work on this?
@ydshieh <|||||>I will work on **`flava`**<|||||>Now, I will work on `Longformer`, `Pegasus` and `VisualBERT`.<|||||>@AShreyam You didn't open a PR, and you merged your branch into your own main branch.<|||||>I am really sorry for the inconvenience.<|||||>Here is the actual pull request. Thank You :)<|||||>Work on `configuration_big_bird.py`<|||||>I'll take on
- LeViT
- DistilBERT
- ResNet
and open up PR :)<|||||>I'll take on CodeGen<|||||>I'll take on Data2Vec ones :)<|||||>I'll take on conditional_detr<|||||>I'll take on realm
<|||||>I'll take on convbert<|||||>I'll take on CLIP =)<|||||>I'll take XLNet<|||||>I can take the wav2vec2 ones if that is ok, is someone else working on those?<|||||>I'll take XLM<|||||>I'll take cvt<|||||>I'll take `pegasus` and `pegasus_x`<|||||>I will take `fnet` and `flava`.<|||||>@ydshieh Added config files for convnext<|||||>Oh it will close this ...?
The previous merged PRs didn't close this issue. Strange!<|||||>@ydshieh that's because there was "Fixes https://github.com/huggingface/transformers/issues/19487" in the description of the PR :)
"Fixes", like "close" or "fix" will close the issue when the PR is merged.<|||||>I would take `gpt_neo` , `gpt_neox_japanese` and `gpt_neox`<|||||>I'll take on
- SpeechToTextTransformer
- SpeechToTextTransformer2
- SqueezeBERT<|||||>I would like to work on `openai` and `opt`<|||||>I will take `mbart` and `mctct`<|||||>I will work on `layoutlm` , `layoutlmv2` , `layoutlmv3`<|||||>I will work on ELECTRA<|||||>I will work on PoolFormer<|||||>I will work on PLBART<|||||>I will work on Nezha<|||||>I'll take maskformer<|||||>Hi, Can I have LayoutLMv2 and BERT<|||||>> Hi, Can I have LayoutLMv2 and BERT
Hi @rushic24 They have been done. You can check [this file](https://github.com/huggingface/transformers/blob/main/utils/documentation_tests.txt) and find other config files to work with 🤗 <|||||>I'll take fsmt next<|||||>While browsing the list of model configurations, I noticed that the DebertaConfig class does not have an example docstring section. Unsure if that is supposed to be like that, but just incase its not, I will add a PR to include the example docstring and maybe I can get some feedback from there.<|||||>I'll work on dpt<|||||>> DebertaConfig
That would be very nice, @Saad135 ! Thank you<|||||>I will take DeBERTa-v2 next<|||||>I can take camembert next<|||||>I can take DPR next<|||||>I can take DeformableDetrConfig next<|||||>Can I take timesformer next?<|||||>> Can I take timesformer next?
Sure! For the context, we decide not to use the tiny random model checkpoints anymore. If there are some downstream models which lack the checkpoint, we just not to provide the expected values.<|||||>Hello, I would like to take on gptj, longformer, and hubert<|||||>@ydshieh , may I share a list of models that are yet to be worked on?
<|||||>@elabongaatuo GPT-J is large, and our CI won't be able to run doctest with its checkpoints.
I think gptj, longformer, and hubert are all covered in
https://github.com/huggingface/transformers/blob/5f3ea66bc0c27ad2a8761fdf8489cf7d72257b93/utils/documentation_tests.txt
Feel free to check the modeling files that are not in the above file 🤗 if you want to work on it ❤️ . Thank you!<|||||>@ydshieh , thank you. m2m_100,llama and mvp don't have modeling files. a go ahead to work on them? <|||||>`llama` has no publicly available checkpoints on the Hub - no need to work on it.
For the other 2 files, you can run doctest against them. If they pass, you can simply add them to `documentation_tests.txt`.
Otherwise, we can discuss how to deal with the errors :-).
- we might need to use a community user's checkpoint
- or a checkpoint without real weights, and indicate this |
transformers | 19,486 | closed | 🚨 🚨 🚨 Fix CvT parameter initialization | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR aims to rectify the difference in parameter initialization between the HF implementation and the original Microsoft implementation (a rough sketch of the change is shown after the list below).
- Initializes torch dense layer weights with trunc_normal instead of normal.
- Initializes cls_token with trunc_normal
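A minimal sketch of the kind of change described (illustrative only; the real `_init_weights` in `modeling_cvt.py` covers more module types, including the `cls_token` parameter, and uses the configured `initializer_range`):
```python
import torch.nn as nn


def _init_weights(module, initializer_range=0.02):
    # Use a truncated normal, matching the original Microsoft implementation,
    # instead of a plain normal distribution.
    if isinstance(module, (nn.Linear, nn.Conv2d)):
        nn.init.trunc_normal_(module.weight, std=initializer_range)
        if module.bias is not None:
            nn.init.zeros_(module.bias)
```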
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
As [discussed](https://github.com/huggingface/transformers/pull/18597#issuecomment-1271354673) @amyeroberts here's the PR regarding the changes for the CvT pytorch model 😊
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-11-2022 09:54:55 | 10-11-2022 09:54:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,485 | closed | [Doctest] Add `configuration_bert.py` | # What does this PR do?
Add `configuration_bert.py` to `utils/documentation_tests.txt` for doctest.
This PR will be used as a template for a new sprint. | 10-11-2022 09:31:58 | 10-11-2022 09:31:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,484 | closed | Update TF whisper doc tests | # What does this PR do?
Fixes doctests for the TF whisper model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 10-11-2022 09:10:42 | 10-11-2022 09:10:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,483 | closed | Fix issue #19300 | # What does this PR do?
This PR fixing issue #19300
<!-- Remove if not applicable -->
Fixes # (issue)
#19300
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
| 10-11-2022 08:22:29 | 10-11-2022 08:22:29 | @sgugger How do I get output_dir in on_train_end callback ? <|||||>@sgugger Never mind, Let me fix the failing tests. <|||||>Let me know if you need any help!<|||||>@sgugger The tests pass now. There was bug in my change and it is great that our tests caught it. <|||||>Thanks again for all your work on this! |
transformers | 19,482 | closed | Fix whisper for `pipeline` | # What does this PR do?
After the merge of #19378 , the feature extractor does not work with the `pipeline` function. This PR is the same as #19385.
| 10-11-2022 07:18:45 | 10-11-2022 07:18:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Before we merge here, let's try to have the following tests working:
- Automatic pipeline tests for dummy whisper model (as discussed offline)
- 2 slow pipeline tests (one for speech recognition, one for speech translation here: [https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a[…]/tests/pipelines/test_pipelines_automatic_speech_recognition.py](https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/tests/pipelines/test_pipelines_automatic_speech_recognition.py#L54)<|||||>We were missing a `_CHECKPOINT_FOR_DOC`, so I added a warning when the tests are skipped.
It seems a little problematic: if it is unused, `quality` will fail (and in our case, I had to change the code to use it 😃).
Other models that are not tested:
`SpeechEncoderDecoderModel, Speech2TextForConditionalGeneration`, which are also just missing the `_CHECKPOINT_FOR_DOC`(see [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/speech_to_text/modeling_speech_to_text.py#L41) ) |
transformers | 19,481 | closed | Making Lxmert Tokenizer independent from bert Tokenizer | # What does this PR do?
Fixes #19303
@sgugger can you review this PR? | 10-11-2022 05:59:04 | 10-11-2022 05:59:04 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,480 | closed | [INT8] BLOOM series model loading back issue | ### System Info
8x A100 GPUs with CUDA 11.3 driver
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Use the following script to save a INT8 quantized and try to load it back.
```
import os
import torch
import logging
import math
from transformers import AutoConfig, pipeline, AutoModelForCausalLM, AutoTokenizer
def get_max_memory_per_gpu_dict(dtype, model_name):
"""try to generate the memory map based on what we know about the model and the available hardware"""
# figure out the memory map - the minimum per gpu required to load the model
n_gpus = torch.cuda.device_count()
try:
# model_params calculation, as we don't have a model yet to do:
# model_params = sum(dict((p.data_ptr(), p.numel()) for p in model.parameters()).values())
config = AutoConfig.from_pretrained(model_name)
h = config.hidden_size
l = config.n_layer
v = config.vocab_size
# from https://github.com/bigscience-workshop/bigscience/tree/6917a3b5fefcf439d3485ca184b4d9f6ab605150/math#model-sizing
model_params = l * (12 * h ** 2 + 13 * h) + v * h + 4 * h
except:
logging.info(f"The model {model_name} has a broken config file. Please notify the owner")
raise
if dtype == torch.int8:
bytes = 1
else:
bytes = torch.finfo(dtype).bits / 8
param_memory_total_in_bytes = model_params * bytes
# add 5% since weight sizes aren't the same and some GPU may need more memory
param_memory_per_gpu_in_bytes = int(param_memory_total_in_bytes / n_gpus * 1.10)
logging.info(f"Estimating {param_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB per gpu for weights")
# check the real available memory
# load cuda kernels first and only measure the real free memory after loading (shorter by ~2GB)
torch.ones(1).cuda()
max_memory_per_gpu_in_bytes = torch.cuda.mem_get_info(0)[0]
if max_memory_per_gpu_in_bytes < param_memory_per_gpu_in_bytes:
raise ValueError(
f"Unable to generate the memory map automatically as the needed estimated memory per gpu ({param_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB) is bigger than the available per gpu memory ({max_memory_per_gpu_in_bytes / 2 ** 30:0.2f}GB)"
)
max_memory_per_gpu = {i: param_memory_per_gpu_in_bytes for i in range(torch.cuda.device_count())}
print("Max memory per gpu:", max_memory_per_gpu)
return max_memory_per_gpu
def load_model():
world_size = torch.cuda.device_count()
model_name = "bigscience/bloom"
logging.info(f"Using {world_size} gpus")
logging.info(f"Loading model {model_name}")
tokenizer = AutoTokenizer.from_pretrained(model_name)
dtype = torch.int8
kwargs = dict(
device_map="auto",
max_memory=get_max_memory_per_gpu_dict(dtype, model_name),
)
logging.info("Using `load_in_8bit=True` to use quanitized model")
kwargs["load_in_8bit"] = True
model = AutoModelForCausalLM.from_pretrained(model_name, **kwargs)
return model, tokenizer
model, tokenizer = load_model()
model.save_pretrained("int8_model/", max_shard_size="8GB")
```
When loading from the directory, I get the following error:
```
RuntimeError: Only Tensors of floating point dtype can require gradients
```
The error occurs during the initialization of the model, using the following loading code:
```
import torch
import torch.distributed as dist
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
model_name = 'int8_model/'
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.int8)
```
### Expected behavior
The loading should pass. Looking for a workaround on it... | 10-11-2022 05:11:36 | 10-11-2022 05:11:36 | Maybe @sgugger has some insights<|||||>It's hard to know which part fails without the whole traceback. I suspect it's when we set the default dtype to `torch_dtype`, which only works for floating dtypes. If that's the case, there is a probably a workaround possible by only setting the default dtype when the `torch_dtype` passed is a floating type.
Also cc @younesbelkada since it's related to int8 format.<|||||>Hey @lanking520 !
Thanks for your issue 💪
Let's try to debug this step by step. I suggest first to make your script run on the model `bigscience/bigscience-small-testing` - when running your script I got incorrect max_memory maps, so I had to overwrite the `max_memory` dict with the following `max_memory={0:"10GB", 1:"10GB"},` (I am running my tests on 2x NVIDIA T4).
After that, the loading script gives me the following error:
```
│ /home/younes_huggingface_co/debug_issues/code/transformers/src/transformers/modeling_utils.py:10 │
│ 49 in _set_default_torch_dtype │
│ │
│ 1046 │ │ `torch.int64` is passed. So if a non-float `dtype` is passed this functions will │
│ 1047 │ │ """ │
│ 1048 │ │ if not dtype.is_floating_point: │
│ ❱ 1049 │ │ │ raise ValueError( │
│ 1050 │ │ │ │ f"Can't instantiate {cls.__name__} model under dtype={dtype} since it is │
│ 1051 │ │ │ ) │
│ 1052 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Can't instantiate BloomForCausalLM model under dtype=torch.int8 since it is not a floating point dtype
```
The "hack" as @sgugger suggested is to "force-load" the weights in a floating point format - if you run:
```
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
```
You can check that the weights are indeed `int8` weights but casted in half-precision
```
>>> model.transformer.h[0].mlp.dense_h_to_4h.weight
Parameter containing:
tensor([[ 96., -90., -9., ..., 36., -25., -25.],
[ 0., -11., -51., ..., 9., 20., 38.],
[ -8., 8., 2., ..., 36., -88., -12.],
...,
[ -6., 33., -41., ..., -32., -18., -45.],
[ -11., -43., -34., ..., -14., -1., -50.],
[ -42., 44., 108., ..., 80., -119., 54.]], dtype=torch.float16,
requires_grad=True)
```
However, please note that even if you manage to run an inference with these weights, you will not be able to retrieve the same accuracy / performance as the 8-bit model that is created by `load_in_8bit=True` from the `fp16` model. This is because of how the `Linear8bitLt` layer is constructed.
The crucial components of this module are the quantization statistics that are stored in[ `self.state.SCB` ](https://github.com/TimDettmers/bitsandbytes/blob/b844e104b79ddc06161ff975aa93ffa9a7ec4801/bitsandbytes/nn/modules.py#L246). The problem when saving the `state_dict` from a `Linear8bitLt` is that it does not save these statistics that are needed at inference. So when you will load 8bit weights, the module will compute new quantization statistics based on the `int8` weights - which will lead to wrong results and computations. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi all - I'm also trying to save and re-load a BLOOM model in 8-bit format, see [https://github.com/TimDettmers/bitsandbytes/issues/80](https://github.com/TimDettmers/bitsandbytes/issues/80).
I'm quite new to the topic and not sure I'm able to follow everything @younesbelkada mentioned, but my understanding is that this is not possible yet, is that correct? |
transformers | 19,479 | closed | Fix `OPTForQuestionAnswering` doctest | # What does this PR do?
The checkpoint has no QA head, so we need to set seed and change the expected values. | 10-11-2022 04:07:22 | 10-11-2022 04:07:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,478 | open | openai whisper ASR pytorch to tflite | ### Model description
I'm trying to figure out how to create TFLite models (int8/float32) for the OpenAI Whisper ASR model (Tiny.en.pt).
Somehow the TFLite file generated by the notebook below crashes while running inference:
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tinynn_pytorch_to_tflite_int8.ipynb
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 10-11-2022 02:13:54 | 10-11-2022 02:13:54 | @patrickvonplaten @sgugger @amyeroberts @ArthurZucker :Could anyone of you help me on this ?<|||||>Below notebopok generates tflite file ,however i have not validated with real speech input
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tflite_from_huggingface_whisper.ipynb<|||||>Can anyone help me to run inference on this notebook to validate generated tflite file <|||||>@ydshieh Could you please help me to validate tflite file that got generated using below google notebook
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/tflite_from_huggingface_whisper.ipynb<|||||>@nyadla-sys The TF Whisper is just released, maybe you can try with it.
However, our team is still working on some TensorFlow model saving issues, so I don't know if the conversion/inference will work out of the box.<|||||>@ydshieh ran inference on converted tflite file and it doesn't work.<|||||>@ydshieh
I could generate encoder int8 tflite model from openai->whisper(pytorch) and ran inference.
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/generate_tflite_from_whisper.ipynb
can someone help me to generate int8 decoder tflite model from openai->whisper(pytorch) and run inference? |
transformers | 19,477 | closed | Adding the state-of-the-art contrastive search decoding methods for the codebase of generation_utils.py | # Adding the state-of-the-art contrastive search decoding method for the `generation_utils` codebase
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #19182
In this PR, I add the source code of our proposed state-of-the-art decoding method for off-the-shelf neural text generation models. The main changes are in the following files: (1) `src/transformers/generation_utils.py`; (2) `examples/pytorch/text-generation/run_generation_contrastive_search.py`. To run the test script, please follow these commands:
```bash
cd examples/pytorch/text-generation;
CUDA_VISIBLE_DEVICES=0 python run_generation_contrastive_search.py --model_type=gpt2 --model_name_or_path=gpt2-large
```
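For context, a minimal sketch of how contrastive search ends up being exposed through `generate()` — `penalty_alpha` and `top_k` are the two knobs, and the prompt and values below are only illustrative:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

inputs = tokenizer("DeepMind Company is", return_tensors="pt")
# penalty_alpha > 0 together with top_k > 1 triggers contrastive search
outputs = model.generate(**inputs, penalty_alpha=0.6, top_k=4, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```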
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] The PR has been well discussed in [19182](https://github.com/huggingface/transformers/issues/19182)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? Yes, I have written the test scripts for the [contrastive search](https://github.com/gmftbyGMFTBY/transformers/blob/csearch-pr-v2/examples/pytorch/text-generation/run_generation_contrastive_search.py)
## Who can review?
According to the suggestions of @gante, @patrickvonplaten and @sgugger can review this PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 10-11-2022 01:24:22 | 10-11-2022 01:24:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @patrickvonplaten context: this is the implementation by the authors of [this NeurIPS paper](https://arxiv.org/abs/2202.06417), as first proposed in #19182 -- a new generation strategy with very interesting results!<|||||>Hi, @sgugger, thank you so much for your suggestions. I will fix these problems quickly!<|||||>Hello @sgugger, is there any document or introduction for auto-APIs?<|||||>The [documentation](https://huggingface.co/docs/transformers/model_doc/auto) would be the place to start. You can also look at all other examples!<|||||>Hello, @sgugger, I have fixed the problems based on your valuable suggestions! Besides, I have updated the test scripts to the auto-APIs of inference.
The command line to run this test script can be found in its docstring.<|||||>Hello @sgugger, I have fixed the problems based on your suggestions.<|||||>Hey @gmftbyGMFTBY,
Super nice PR! Contrastive search decoding would be a great addition to Transformers. I feel a bit uneasy of the logic in src/transformers/generation_contrastive_search.py and would prefer to not add a new file here and instead try to adapt the code more to our existing design (try to make use of logits processor, call the model **only** in the loop and not before).
We usually haven't added any new files for generation and molding the design more into what already exists would help a lot with maintainability.
As an example for constrained decoding, we sadly don't manage to maintain the function anymore (see https://github.com/huggingface/transformers/pull/17920) because the code was too different & too hard for us to maintain (IMO).
If possible I'd advocate quite strongly for trying to make the PR a bit more compatible with our current design (it shouldn't require much work IMO).
@gante @sgugger what do you think? <|||||>I don't know the generate code well enough to evaluate how easy/hard it would be for this PR to fit in the existing design. I solely gave my approval based on the fact it was mimicking something that existed with constraint decoding. Sorry, I wasn't aware it wasn't maintained anymore.
Will know for the next time :-)
<|||||>@patrickvonplaten replying here since I believe the multiple important points you raised are related. This will be a bit long, but bear with me - I think we can improve the quality of the PR before it gets merged 🙌
1. `ContrastiveDecodingOneStepFast` (which should be snake cased, perhaps into `contrastive_decoding_step`) being a stand-alone function - it could be part of the loop, yes, but this is a long operation that in essence replaces picking the argmax (in `greedy_search`) or sampling (in `sample`). I agree it should not be public, but I believe we would stand to gain from a readability standpoint if we separate it somehow -- perhaps a private method in the MixIn or an inner function to `contrastive_search`? These two options would also mean that we don't need to pass `model` as an argument, which is indeed awkward.
2. `ranking_fast` being a `LogitsProcessor` - I would disagree here, despite being an operation that processes logits. In addition to breaking the fairly stable `call()` API with two new tensor inputs (`context_hidden` and `next_hidden`), it can only be used with `contrastive_search`. As such, I think we would only be adding a confusing `LogitsProcessor` to the public API.
3. Despite the above, I agree that we should pass and compute the logits processors before the decoding function, like the other `generate` methods -- this was an oversight on my end at review time! As for `top_k`, I have no strong feelings - on one hand, `contrastive search` is not meant to be used with other `LogitWarpers` (like `TemperatureLogitsWarper`), on the other hand there is probably no harm in using `LogitWarpers`.
(now the most challenging issue in design, IMO)
4. A naive implementation of `contrastive_search` needs two sets of forward passes per token -- once to get the `top_k` candidate tokens, another to get the future hidden states for each candidate (to be used as a filter). We can avoid this 2x cost if we pipe future `past_key_values` corresponding to the selected candidate into the actual `past_key_values`. It does mean, however, that the first token requires an additional forward pass. The current code does it by placing `prepare_inputs_for_generation()` (and a few other ops) in unconventional places, before the loop and at the end of the loop. Perhaps it would be better if we kept the original structure, but added an `if` at the start loop -- if we are in the first iteration, do this set of operations.
WDYT about these 4 points?<|||||>Thanks for the summary @gante !
Regarding the 4 points above:
1.) I think the `ContrastiveDecodingOneStepFast` is the core of the contrastive generation algorithm and I don't see any advantage having it it's own "sub-function" (it'll never be called from another function than `contrastive_search`). Also, at the moment, the method returns 6 tensors and accepts 8 or so arguments so it's much more than just replacing the greedy argmax operation IMO. To begin with, I think it'd be very helpful to just copy-paste all the code into the `contrastive_search` and then later down the road I think we could see if parts of it could be moved out (don't think it's necessary though).
2.) I think we would have the following gains to having it implemented as a logit processor:
- We understand the method better & easier to maintain. If we mold `ranking` fast into a logit processor, it's much easier to understand it
- Very easy to test this method
- It is to me a method that processes logits -> so logically it should be a logits processor to me
- Now regarding the API, IMO `logits_processor` inputs are not just restricted to `input_ids` and `scores`. IMO, the API is rather (`input_ids`, `scores`, <other-args-that-are-needed>) => `scores` . Also, we've already "broken" the "input_ids" & "scores" - only API here: https://github.com/huggingface/transformers/blob/d2e5b19b821f0cf43c7cf4f01be5faa1cb20aa64/src/transformers/generation_utils.py#L2989
=> I understand your point here and ok for me to not make it a logits processor, but IMO it would be better/cleaner
3.) Yes I think it's very important to align the structure of the generate method as much as possible to the other ones
4.) Totally fine for me to have an extra forward pass in the beginning, it's just important that we make the structure the same as greedy or sample (I don't have a problem with if statements here at all). For maintenance and also to understand the method, it's very important IMO that one could compare greedy search and constrastive search line-by-line and then see quickly how the two methods differ<|||||>Thank you for your valuable responses! I will revise this PR according to your reviews.<|||||>@gmftbyGMFTBY let's go with the core of @patrickvonplaten's suggestions. Patrick and I also talked on Slack to align a few details regarding `ranking_fast` :D
Here's a summary of the main changes we are requesting, to ensure the code you're adding remains easy to maintain:
1. Move the contents of `ContrastiveDecodingOneStepFast` to where the function is called, as opposed to being a function call;
2. Keep `ranking_fast` as it is now (i.e. NOT a logits processor);
3. Let's apply the logits processors at the start of each iteration, and move `top_k` to the logits warpers (like `sample` [does it](https://github.com/huggingface/transformers/blob/d2e5b19b821f0cf43c7cf4f01be5faa1cb20aa64/src/transformers/generation_utils.py#L2059)). This ensures that there are minimal differences between generation strategies;
4. Let's rearrange the order of operations such that all model forward passes happen inside the generation loop (with an `if` for the operations that are only supposed to happen on the first iterations).
Here is a diagram with the expected code structure, to ensure we are all on the same page: https://miro.com/app/board/uXjVPMiVFFg=/
Finally, thank you for your cooperation with us 🙌 This back and forth may be a bit frustrating, but it will ensure your contribution will be long-lived!
<|||||>@gante Nice! but I just saw your message after I finished writing the `logits_processor` corresponding to `ranking_fast` function.
The followings are the implementation of the `ranking_fast`'s `logits_processor`:

It is initialized in `_get_logits_processor` function:

and will be called:

Can I keep this implementation or just still employ the `ranking_fast` function?
<|||||>@gante Oh, I got your reason about not using the `logits_processor`. I will follow the instruction in the `sample` function!<|||||>The revisions have been updated!<|||||>Hello, @gante, I have updated the PR based on your suggestions.<|||||>Oh, I am still working on the integration test.<|||||>@gmftbyGMFTBY you probably have to add the `@slow` decorator to the test, and run it locally with `RUN_SLOW=1 py.test (...)` to confirm that it is working.
Our CI doesn't run tests with `@slow` on push (and fails if the test doesn't have the decorator and is actually slow), but we run them every 24h and track them internally :)<|||||>Ok, I got it!
<|||||>Okay, I am working on it! Thanks a lot for your reviews!<|||||>BTW @gmftbyGMFTBY,
Just read a through your extremely nice issue! It seems like you experimented with OPT as well, so maybe let's add a test for OPT as well then ? :-) OPT's `past_key_values` are slightly different compared to GPT2's `past_key_values` so maybe instead of adding a test for GPT-J and GPT-2, it would make more sense to add a test for OPT in addition to GPT2?
Also, if the paper is only concerned with open-ended generation (so less with encoder-decoder architectures), I'm also totally fine with **not** testing for T5 and BART (it's a nice to have, but if it takes too much time and it's not too important - happy to skip it!).
Regarding the fast dummy test, could you maybe make use of those dummy models:
- https://huggingface.co/hf-internal-testing/tiny-random-gpt2
- https://huggingface.co/hf-internal-testing/tiny-random-gptj
- https://huggingface.co/hf-internal-testing/tiny-random-t5
- https://huggingface.co/hf-internal-testing/tiny-random-bart
The tests colud look very similar to:
https://github.com/huggingface/transformers/blob/71ca79448cd334970fa2893f4faaa094ca13ca6f/tests/generation/test_generation_utils.py#L2053
just much shorter, *i.e.* they only need to test for shape equality. <|||||>Yeah, we have already tested the OPT models, and it works fine. I will supply more tests to the pre-trained models that you mentioned.<|||||>@patrickvonplaten more tests about these models are added:
* gpt2-large
* gpt-j (EleutherAI/gpt-j-6B)
* opt (facebook/opt-6.7b)
* BART (facebook/bart-large-cnn)
* T5 (flax-community/t5-base-cnn-dm)
These tests are passed successfully. Can you do the final check about this PR?<|||||>Thank you for being part of this process @gmftbyGMFTBY 🙌 All queries have been addressed and the PR looks in a good state, merging! <|||||>@gante @patrickvonplaten @sgugger Wow, Thank you very much for your help and support. Love huggingface team! <|||||>@gante @patrickvonplaten @sgugger -- Many thanks for your kind help throughout the process! It means a great deal to me and @gmftbyGMFTBY. Huggingface is the best!<|||||>Great work @gmftbyGMFTBY and @yxuansu, thanks for bearing with us through the PR :-) |
transformers | 19,476 | closed | [REIMPLEMETATION] Vision encoder decoder Onnx conversion | # What does this PR do?
This PR is a reimplementation of the Vision Encoder Decoder ONNX conversion as a Seq2Seq model, as the documentation explains: [Encoder-decoder models inherit from OnnxSeq2SeqConfigWithPast](https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/onnx#transformers.onnx.OnnxSeq2SeqConfigWithPast)
PR #19254 didn't follow these classes. There are several examples in the repo of how to use them: https://github.com/huggingface/transformers/blob/v4.22.2/src/transformers/models/mbart/configuration_mbart.py
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@ChainYo for OnnxConfigs
@lewtun & @sgugger for approving PR: #19254 | 10-10-2022 20:27:40 | 10-10-2022 20:27:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19476). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @WaterKnight1998 thanks for your PR!
Indeed PR #19254 splits the model in separate encoder / decoder pieces, which differs to other seq2seq models that are currently implemented as a single ONNX graph. The main reason is the following:
* To speed up the decoding process, it is more efficient to have a single pass through the encoder, followed by N passes through the decoder.
* To support the caching of past key-value pairs, it is more efficient to have a separate decoder
Do you happen to have a latency benchmark for the `VisionEncoderDecoder` export that compares your PR vs the current implementation? I think this would be the axis on which we'd consider incorporating your changes, but I would be surprised if a single graph can beat the decomposed ones.
You can find more information in our `optimum` library, where we'll be implementing the pipeline for inference: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#export-and-inference-of-sequencetosequence-models
cc @mht-sharma re the original implementation<|||||>> Hi @WaterKnight1998 thanks for your PR!
>
> Indeed PR #19254 splits the model in separate encoder / decoder pieces, which differs to other seq2seq models that are currently implemented as a single ONNX graph. The main reason is the following:
>
> * To speed up the decoding process, it is more efficient to have a single pass through the encoder, followed by N passes through the decoder.
> * To support the caching of past key-value pairs, it is more efficient to have a separate decoder
>
> Do you happen to have a latency benchmark for the `VisionEncoderDecoder` export that compares your PR vs the current implementation? I think this would be the axis on which we'd consider incorporating your changes, but I would be surprised if a single graph can beat the decomposed ones.
I will try to create it, but I didn't take into account what you mention. It makes sense to just do a single pass in the encoder
> You can find more information in our `optimum` library, where we'll be implementing the pipeline for inference: https://huggingface.co/docs/optimum/onnxruntime/modeling_ort#export-and-inference-of-sequencetosequence-models
>
I am looking forward for this implementation, I am thinking on running Donut model in production and it will be very cool, actually the inference times are pretty bad. I have an open PR for ONNX conversion: #19401
<|||||>@lewtun is this the pipeline that you mention: https://github.com/huggingface/optimum/blob/996f209147a466c7ecf5bfb29c9fd2e9831ea3a7/optimum/onnxruntime/modeling_seq2seq.py#L154?<|||||>> @lewtun is this the pipeline that you mention: https://github.com/huggingface/optimum/blob/996f209147a466c7ecf5bfb29c9fd2e9831ea3a7/optimum/onnxruntime/modeling_seq2seq.py#L154?
Yes @WaterKnight1998, in this implementation the encoder and decoder part are exported separately and the inference is performed using ORT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun @sgugger please reopen it <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 19,475 | closed | [Swin] Replace hard-coded batch size to enable dynamic ONNX export | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR tweaks the modeling code of `swin` to enable dynamic batch sizes with the ONNX export. With this fix, the ONNX slow tests for this model now pass, including the slow tests for the original PyTorch model:
```bash
# This passes
RUN_SLOW=1 pytest -x -sv tests/models/swin/test_modeling_swin.py
# This also passes
RUN_SLOW=1 pytest -x -sv tests/onnx/test_onnx_v2.py -k "swin"
```
Since this change also impacts other models, I've also checked the modeling slow tests pass for:
- [x] `maskformer`
- [x] `donut_swin`
- [x] `swin_v2`
Related to https://github.com/huggingface/transformers/issues/17476
| 10-10-2022 20:06:13 | 10-10-2022 20:06:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(a bit off-topic, but still related question)
Viewing the issue and the fix provided this PR, I was thinking we would have a lot of the same errors due to this `hard-coded batch size`. However, when I check `bert`:
https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/models/bert/modeling_bert.py#L962
and
https://github.com/huggingface/transformers/blob/10100979ed0594d4cfe1982cdfac9642a68e473e/src/transformers/models/bert/modeling_bert.py#L968
but the onnx tests still pass for `bert`. Are `batch_size` and `seq_length` not hard-coded here? Just wondering if @lewtun has already some insight regarding this.
<|||||>> Is the issue caused by the changes in #19255? More precisely, from (newly added code) https://github.com/dwyatte/transformers/blob/949683675d83cc38620106626822279cd45b076b/src/transformers/onnx/convert.py#L368
>
> The error shows `Outputs values doesn't match between reference model and ONNX exported model` - it must be non-trivi al to figure out this is coming from the shape things! How are you able to find out 💯 ? Is there some tool we can use to check things (tensor values/shape) when running onnx inference?
Yes, this issue was surfaced by #19255, which implemented a stronger validation test on exported ONNX models. Basically, it generates the ONNX graph using dummy data with one batch size `b`, and then validates the forward pass with a different `b'`.
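As a rough sketch of that check (not the actual implementation in #19255; the file name, input/output layout and tolerance are assumptions):
```python
import numpy as np
import onnxruntime as ort
import torch
from transformers import AutoModel

# assumes the checkpoint was already exported to "swin.onnx" with a different batch size, e.g. 2
model = AutoModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
session = ort.InferenceSession("swin.onnx")

pixel_values = torch.randn(4, 3, 224, 224)  # validate with batch size 4
with torch.no_grad():
    reference = model(pixel_values).last_hidden_state.numpy()
onnx_output = session.run(None, {"pixel_values": pixel_values.numpy()})[0]
print(np.allclose(reference, onnx_output, atol=1e-4))
```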
The reason it can be non-trivial to figure out when an export fails to have agreement between the PyTorch / ONNX models is that ONNX traces a graph based on dummy data, and this tracing can be incorrect if there are data-dependent flow statements (Swin in particular has a lot of these if/else statements). Currently , the best tool I know of is to visualise the graph with [Netron](https://netron.app/) and manually inspect for discrepancies.
> Viewing the issue and the fix provided this PR, I was thinking we would have a lot of the same errors due to this `hard-coded batch size`. However, when I check `bert`:
I think in those cases we don't hit a problem because `batch_size` is only used to create the attention mask when none is provided. Since our dummy input provides an attention mask, this flow in the graph is never traced AFAICT |
transformers | 19,474 | closed | Sample method doesn't work for mt5 architecture | ### System Info
- `transformers` version: 4.21.3
- Platform: Linux-5.4.0-122-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
MT5 sample method @patrickvonplaten
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
The following code (from https://huggingface.co/docs/transformers/v4.22.2/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.sample)
```python
from transformers import (
MT5Tokenizer,
MT5ForConditionalGeneration,
LogitsProcessorList,
MinLengthLogitsProcessor,
TopKLogitsWarper,
TemperatureLogitsWarper,
StoppingCriteriaList,
MaxLengthCriteria,
)
import torch
tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base")
# set pad_token_id to eos_token_id because GPT2 does not have a EOS token
input_prompt = "Today is a beautiful day, and"
input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids
# instantiate logits processors
logits_processor = LogitsProcessorList(
[
MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
]
)
# instantiate logits processors
logits_warper = LogitsProcessorList(
[
TopKLogitsWarper(50),
TemperatureLogitsWarper(0.7),
]
)
stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
torch.manual_seed(0)
outputs = model.sample(
input_ids,
logits_processor=logits_processor,
logits_warper=logits_warper,
stopping_criteria=stopping_criteria,
)
tokenizer.batch_decode(outputs, skip_special_tokens=True)
```
results in `ValueError: You have to specify either input_ids or inputs_embeds`
Tried to explicitly set decoder_input_ids but it didn't help
### Expected behavior
Tried to use sample method instead of generate and failed to get rid of value error | 10-10-2022 19:44:47 | 10-10-2022 19:44:47 | moreover, if I try to use generate with num_beams=1 and do_sample=True as it says in documentation, I got really weird scores:
inp = batch # result of next(item(torch.utils.Dataloader))
generate_results = model.generate(
input_ids=inp['input_ids'].to(model.device),
attention_mask=inp['attention_mask'].to(model.device),
output_scores=True,
num_beams=1,
do_sample=True,
return_dict_in_generate=True,
)
```
SampleEncoderDecoderOutput(sequences=tensor([[ 0, 259, 264, 66689, 31018, 2793, 79143, 1334, 259,
3205, 1264, 1285, 2776, 179124, 192823, 149529, 13647, 260,
1, 0],
[ 0, 563, 3392, 259, 37446, 4205, 59633, 299, 19966,
259, 20364, 484, 261, 259, 21230, 332, 287, 9983,
9844, 260],
[ 0, 92868, 111621, 1498, 543, 68093, 259, 23886, 446,
22708, 261, 259, 3266, 259, 31989, 411, 4338, 3695,
18203, 388],
[ 0, 2553, 1049, 7584, 8608, 3671, 261, 892, 6888,
3647, 7077, 456, 23480, 729, 23761, 1128, 260, 1,
0, 0],
[ 0, 4343, 729, 79829, 261, 3553, 12799, 261, 259,
279, 33658, 433, 30643, 308, 116992, 34741, 259, 95298,
388, 10081],
[ 0, 259, 73394, 304, 459, 54650, 261, 1361, 24856,
730, 169977, 1, 0, 0, 0, 0, 0, 0,
0, 0],
[ 0, 25870, 261, 259, 109102, 261, 2477, 277, 270,
3256, 416, 259, 185416, 521, 260, 260, 4837, 609,
277, 263],
[ 0, 259, 74815, 688, 13986, 657, 425, 259, 46805,
7378, 259, 30821, 877, 816, 267, 1, 0, 0,
0, 0],
[ 0, 259, 279, 259, 74725, 2266, 259, 71145, 748,
274, 24186, 261, 37828, 324, 1, 0, 0, 0,
0, 0],
[ 0, 259, 176741, 11966, 425, 8790, 1344, 43078, 543,
259, 279, 259, 98503, 1400, 259, 30821, 274, 83935,
1633, 13637],
[ 0, 486, 10753, 263, 1432, 344, 1537, 1459, 27906,
1537, 2985, 261, 1866, 569, 6535, 339, 259, 45628,
281, 287],
[ 0, 336, 3031, 272, 277, 270, 1689, 4065, 288,
342, 714, 260, 1, 0, 0, 0, 0, 0,
0, 0]], device='cuda:0'), scores=(tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -4.1323, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -6.2682, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[-0.6975, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf],
[-inf, -inf, -inf, ..., -inf, -inf, -inf]], device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -4.8971, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -7.8711, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -3.7007, -inf, ..., -inf, -inf, -inf],
...,
[-0.7802, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -6.0860, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -3.4524, -inf, ..., -inf, -inf, -inf],
[ -inf, -6.1068, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -6.8437, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -0.4513, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -4.2931, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, 7.0613, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -8.3437, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[14.4250, 1.6957, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -8.2820, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[15.0611, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[15.2473, -inf, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -0.5828, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[15.2603, 2.0967, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, -2.9308, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[14.6447, 1.9284, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[ -inf, 4.6353, -inf, ..., -inf, -inf, -inf],
[ -inf, -8.9710, -inf, ..., -inf, -inf, -inf],
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -inf, -inf, ..., -inf, -inf, -inf],
[ -inf, -6.0908, -inf, ..., -inf, -inf, -inf],
[14.7882, 1.9284, -inf, ..., -inf, -inf, -inf]],
device='cuda:0'), tensor([[15.3742, 2.2133, -inf, ..., -inf, -inf, -inf],
[ -inf, -0.5673, -inf, ..., -inf, -inf, -inf],
[ -inf, -3.3424, -inf, ..., -inf, -inf, -inf],
...,
[ -inf, -2.2742, -inf, ..., -inf, -inf, -inf],
[ -inf, -6.5111, -inf, ..., -inf, -inf, -inf],
[15.1830, 2.0473, -inf, ..., -inf, -inf, -inf]],
device='cuda:0')), encoder_attentions=None, encoder_hidden_states=None, decoder_attentions=None, cross_attentions=None, decoder_hidden_states=None)
```<|||||>@gante or @ArthurZucker could you take this one? :-) <|||||>Hi @tatiana-iazykova 👋
Regarding the issue in the first post: the problem is that the docstring example (that you based your script on) only works for decoder-only models, and an encoder-decoder model is used. For encoder-decoder models, the input needs some additional processing, that `generate()` gracefully handles [here](https://github.com/huggingface/transformers/blob/f4ef78af543a166551889da8737cc3134a7d9dd3/src/transformers/generation_utils.py#L1281). While we *could* make `model.sample()` and other generation strategies handle this sort of cases, the code will quickly become a pain to maintain (input handling would have to be added everywhere), so I'm inclined not to make any attempt to fix and to redirect towards `model.generate()` instead :) (cc @patrickvonplaten -- maybe we should add a line in the docstrings to prioritize the use of `.generate()`?)
As for the second issue -- can you share a reproducible script? 🙏 The scores would look normal if a logits processor like top_k was used, but it doesn't seem to be the case 🤔 <|||||>Noted.
As for the second issue, the input was standard like the one specified here (https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb#scrollTo=kTCFado4IrIc)
and code for generate was the following:
```python
generate_results = model.generate(
input_ids=inp['input_ids'].to(model.device),
attention_mask=inp['attention_mask'].to(model.device),
output_scores=True,
num_beams=1,
do_sample=True,
return_dict_in_generate=True,
)
```<|||||>@tatiana-iazykova I see -- in the example you shared, `top_k` is not passed, so it inherits the [default value](https://huggingface.co/docs/transformers/v4.23.1/en/main_classes/text_generation#transformers.generation_utils.GenerationMixin.generate.top_k) (50). `top_k` sets the log probability of all but the K most likely tokens in `-inf`, which explains the numbers you see :)
As a counter example, you can turn `top_k` off by setting it to the size of the vocabulary:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "distilgpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
inputs = tokenizer("This is a simple test", return_tensors="pt")
generate_outputs = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
output_scores=True,
num_beams=1,
do_sample=True,
return_dict_in_generate=True,
top_k=tokenizer.vocab_size
)
print(generate_outputs)
```<|||||>I'm closing the issue as this is intended behavior, but feel free to reopen with further queries :) |
transformers | 19,473 | closed | Fix `XGLMModelLanguageGenerationTest.test_batched_nan_fp16` | # What does this PR do?
#18057 added this test to test running with fp16.
However, `from_pretrained(model_name, torch_dtype=torch.float16` seems **not able to change the dtype** for weights registered below:
https://github.com/huggingface/transformers/blob/a7bc4221c0c09857b30ac467e7de86d3f5a7c482/src/transformers/models/xglm/modeling_xglm.py#L168-L176
and `hidden_states` becomes again `float32` (because `position` is) at
https://github.com/huggingface/transformers/blob/a7bc4221c0c09857b30ac467e7de86d3f5a7c482/src/transformers/models/xglm/modeling_xglm.py#L715
and finally failed at `hidden_states = self.self_attn_layer_norm(hidden_states)` with
```bash
RuntimeError: expected scalar type Float but found Half
```
| 10-10-2022 18:22:57 | 10-10-2022 18:22:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger OK, let me check if I can do something for (not a real) weights defined by
```python
self.register_buffer("weights", emb_weights)
``` |
transformers | 19,472 | closed | Update `WhisperModelIntegrationTests.test_large_batched_generation` | # What does this PR do?
Update the expected values for `WhisperModelIntegrationTests::test_large_batched_generation`.
It is probably due to the different GPUs used.
See currently failing test [here](https://github.com/huggingface/transformers/actions/runs/3212658214/jobs/5251735170).
| 10-10-2022 18:04:43 | 10-10-2022 18:04:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,471 | closed | Suppress warning when using `DataCollatorForSeq2Seq` | ### Feature request
When using a fast tokenizer in the `DataCollatorForSeq2Seq`, currently the following warning is printed
```
You're using a T5TokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding.
```
As a temporary solution, what is the suggested way to suppress it?
Is there any plan to update `DataCollatorForSeq2Seq`?
Thanks a lot in advance for your help!
### Motivation
N/A
### Your contribution
N/A | 10-10-2022 17:55:04 | 10-10-2022 17:55:04 | cc @sgugger <|||||>Could you post a clear reproducer of the issue? Thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Could you post a clear reproducer of the issue? Thanks a lot!
When fine-tuning a DialoGPT-medium model with a custom Dataset class like so:
```
from torch.utils.data import Dataset


class ConversationDataset(Dataset):
def __init__(self, tokenizer, file_path, block_size):
self.tokenizer = tokenizer
self.block_size = block_size
self.inputs = []
self.responses = []
with open(file_path, 'r', encoding="utf-8") as file:
lines = file.readlines()
for i in range(0, len(lines), 2):
if i + 1 < len(lines):
self.inputs.append(lines[i].strip().replace("input: ", ""))
self.responses.append(lines[i + 1].strip().replace("response: ", ""))
# Tokenize and pad inputs and responses in a single step
self.input_tensors = tokenizer(self.inputs, return_tensors='pt', padding=True, truncation=True, max_length=self.block_size)
self.response_tensors = tokenizer(self.responses, return_tensors='pt', padding=True, truncation=True, max_length=self.block_size)
```<|||||>This is being fixed by #23742 |
transformers | 19,470 | closed | CLI: add import protection to datasets | # What does this PR do?
Add import protection to datasets -- I've seen two or three recent issues where the authors can't post their env due to `datasets` failing to import on `pt-to-tf` ([example](https://github.com/huggingface/transformers/issues/19445)) | 10-10-2022 17:17:01 | 10-10-2022 17:17:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,469 | closed | Fix `FlaubertTokenizer.__init__` | # What does this PR do?
There was a tiny error in #19330
```python
do_lowercase=do_lowercase**kwargs,
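# Presumably the intended call separates the keyword argument from the unpacked
# kwargs, e.g. `do_lowercase=do_lowercase, **kwargs` (a hypothetical reconstruction
# of the fix, not the verbatim diff).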
``` | 10-10-2022 16:52:14 | 10-10-2022 16:52:14 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 19,468 | closed | Add warning in `generate` & `device_map=auto` & half precision models | # What does this PR do?
This PR mainly fixes 2 issues:
- https://github.com/TimDettmers/bitsandbytes/issues/42
- https://github.com/huggingface/transformers/issues/19445
This issue seems to be unrelated to `bitsandbytes` 8-bit models, but is slightly trickier than that. When instantiating a model using `device_map=auto` (EDIT: regardless of the `dtype`), the model returns the logits on the same device as the input. I think this is expected, since this is how `accelerate` builds its hooks for each module, so I don't expect the fix to be done in `accelerate` but rather on the `transformers` side.
Therefore, if a user calls `generate` or just a simple forward pass with a half-precision model that has been instantiated with `device_map=auto`, and the input is initially on the CPU, they may encounter unexpected errors such as `top-k-cpu not implemented for Half`, since the sampling operations (`top_k`, `top_p`, etc.) are done on the same device as the logits - which, in this specific case, are on the CPU.
This PR addresses this by forcing the logits to be on the same device as the **first** module of the model (in case the model is sharded across multiple devices, it makes sense to have the `input_ids` on the `device` of the first module). The PR also adds a warning message, suggesting that the user explicitly put the `input_ids` on the same device type as the model.
The PR also adds a slow `bnb` test to make sure this situation will not happen in the future!
# How to reproduce the issue?
With this simple snippet you can reproduce the issue:
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
MAX_NEW_TOKENS = 128
model_name = 'gpt2'
text = """
Q: On average Joe throws 25 punches per minute. A fight lasts 5 rounds of 3 minutes.
How many punches did he throw?\n
A: Let’s think step by step.\n"""
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer(text, return_tensors="pt").input_ids
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map='auto',
torch_dtype=torch.float16
)
generated_ids = model.generate(input_ids, max_length=len(input_ids[0])+25, do_sample=True, top_p=0.7)
print(tokenizer.decode(generated_ids[0]))
```
Thanks!
cc @sgugger @ydshieh @Narsil | 10-10-2022 16:32:03 | 10-10-2022 16:32:03 | Thanks for the feedback @Narsil !
Regarding `accelerate`, I am unsure about the magic that happens there and whether it is fixable without breaking anything. If there is something I can fix in `accelerate`, I'm happy to open a PR there! Gently pinging @sgugger and @muellerzr to see how we can fix that from `accelerate`.
I think that `next(self.parameters())` should do the trick too. Regarding the ordering in PyTorch, I have observed a similar phenomenon in https://github.com/huggingface/transformers/pull/18312: when printing a module, the output is sensitive to the order of each submodule. I think that `module.parameters()` uses the same logic as `print` -> it uses the `self._modules` attribute from [`nn.Module`](https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module).
So I believe that if a model has been defined correctly (i.e., with the `Embedding` layer at the beginning of the module), `next(self.parameters())` should return a parameter of the `Embedding` layer. This is an example script I used to confirm my intuition, but small details that I might be missing could change my hypothesis!
```
>>> model = nn.Sequential(
... nn.Embedding(1, 2),
... nn.Linear(2, 3),
... nn.Linear(3, 4),
... )
>>> list(model.parameters())[0]
Parameter containing:
tensor([[0.1374, 1.0764]], requires_grad=True)
```
But maybe there is a better way to do so! I'll check what is done on `accelerate`.
Also, in the worst case, we can just keep only the warning message and require the user to pass `input_ids` that are on the same device as the model!<|||||>This should be neither in Accelerate nor here in my opinion. There is a legitimate use case where you want to:
- make the forward passes on your GPU
- have the generation stuff happen on CPU
because you lack GPU memory (for instance). This is why Accelerate does not force the inputs to be on the same device as the model.
We can add a warning on the Transformer side, but we shouldn't force the inputs to change device. Just tell the user in case they did a mistake and that generation might be slow.<|||||>Perfect thanks!
Fine with this change! I have kept the warning message and reverted the force assignment in aaf6ecb4d2f8872204f7ad80b38f21ac04b7b267. Let me know what you think ;)!
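For reference, a minimal sketch of the kind of check I have in mind (hypothetical, not the exact code that was merged):
```python
import warnings

import torch


def warn_if_inputs_on_wrong_device(model: torch.nn.Module, input_ids: torch.Tensor) -> None:
    # Compare against the device of the first parameter; for sharded models this is
    # assumed to be the device of the first module (e.g. the embedding layer).
    model_device = next(model.parameters()).device
    if input_ids.device.type != model_device.type:
        warnings.warn(
            f"`input_ids` are on {input_ids.device} while the model is on {model_device}. "
            "Generation may be slow or fail for half-precision models; consider moving "
            "the inputs to the model device first."
        )
```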
We should also ask ourselves whether to put the warning inside `sample` or, more generally, in `generate` before calling `sample`.<|||||>Perfect thanks! Adapted the changes in 2a89fcd50e18d0da3bec56e878378f513bbed0c1. Will merge once it's green ;)
Thanks a lot @Narsil & @sgugger !<|||||>I am just reading this discussion for learning purposes, as you all know this stuff better than me. But @younesbelkada, you mentioned
> When instantiating a model using device_map=auto and for half-precision models, the model returns the logits on the same device as the input.
I feel a bit confused: for models running in float32, **does the model return the logits on the same device as the input too**?
I know that for float32 we don't have an issue on CPU. But I am just wondering about the relationship between `dtype` and `the same device`.<|||||>Thanks for bringing up my comment @ydshieh !
I think my comment is slightly misleading here: the model indeed returns the logits on the same device as the `input_ids` even if it is in float32. The thing is, some PyTorch operations such as `topk` are supported on CPU for float32 but not for float16. So there is no relationship between `dtype` and `the same device` - if you load a model using `device_map=auto`, the forward pass of the model will return the output on the same device as the input!
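A tiny illustration of what I mean (observed on the PyTorch versions current at the time; newer releases may implement more half-precision ops on CPU):
```python
import torch

scores = torch.randn(1, 10)
print(torch.topk(scores, k=5).values)  # float32 top-k is implemented on CPU

try:
    print(torch.topk(scores.half(), k=5).values)
except RuntimeError as err:
    # At the time, this raised a `not implemented for 'Half'` error on CPU.
    print(err)
```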
Will update my comment for clarification! Let me know if anything else is unclear <|||||>No problem @younesbelkada. I was probably focusing on the words too much - bad (sometimes) habits :-)<|||||>Ahah don't worry at all! 💪 Agree that my previous statement was super confusing |
transformers | 19,467 | closed | Redundant normalisation of image and text features in OWL-ViT | ### Who can help?
@alaradirik
### Issue description
Hi,
Thank you for the codebase! As the title suggests, I think that in `modeling_owlvit.py` the image and text features are normalised twice, while in the original codebase from Google Research they are normalised only once. In particular, in `modeling_owlvit.py` the image and text features are normalised both in lines 1073-174 and in lines 1145-1146. By contrast, in the original code, in [https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/layers.py](https://github.com/google-research/scenic/blob/main/scenic/projects/owl_vit/layers.py), the features are normalised only in lines 86-89, whereas in line 144 the normalisation parameter is set as `normalize=False` and there is a comment explicitly saying `Don't normalize image and text embeddings:`.
I think this is sensible, as there is no reason for double normalisation, which normally leads to performance degradation. Please let me know what you think, and whether I'm wrong, as I might be missing something. | 10-10-2022 16:12:58 | 10-10-2022 16:12:58 | Hi @ekazakos, you're right, but the image embeddings are not normalized twice. `OwlViTModel.forward()` is called within `OwlViTForObjectDetection.image_text_embedder()` with `return_base_image_embeds=True`. This ensures that we can retrieve both the unmodified CLIP (OwlViTModel) embeddings and logits using the normalized features, and also the unnormalized image embeddings (lines 1085-1087).
The reason we do this is that OwlViT is trained in two stages: (1) training CLIP / OwlViTModel without any modifications, and (2) training the object detection head and fine-tuning the base CLIP model.
Hope this helps!<|||||>Hi @alaradirik,
I suspected that this was the reason you did that. And thanks for the clarification, it's helpful! Yet, that implies that the text features are indeed normalised twice, no?<|||||>> Hi @alaradirik,
>
> I suspected that this was the reason you did that. And thanks for the clarification, it's helpful! Yet, that implies that the text features are indeed normalised twice, no?
Yes, you're right indeed and thanks for pointing it out! I'll be opening a fix PR shortly.<|||||>Glad I could help! Could you please let me know whether this boosts validation performance at all?
<|||||>
Hey @ekazakos, sorry for the delay! The issue will be fixed with this [PR](https://github.com/huggingface/transformers/pull/19712) but it doesn't affect the performance as double normalization yields the same results.
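A quick way to see why (a small standalone sketch, independent of the model code): L2-normalizing an already L2-normalized tensor is a no-op, so the duplicated call does not change the values.
```python
import torch

x = torch.randn(4, 512)
once = x / x.norm(p=2, dim=-1, keepdim=True)
twice = once / once.norm(p=2, dim=-1, keepdim=True)
print(torch.allclose(once, twice))  # True: the second normalization changes nothing
```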
|