repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 20,268 | closed | Adding doctest for `zero-shot-classification` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 11:08:37 | 11-16-2022 11:08:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20268). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,267 | closed | Complete doc migration | Reverts https://github.com/huggingface/transformers/pull/20125
Everything is handled on the doc-builder side now | 11-16-2022 10:58:56 | 11-16-2022 10:58:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20267). All of your documentation changes will be reflected on that endpoint.<|||||>LGTM, and thanks for the awesome work. I gathered my courage to approve.
<|||||>well, I forgot to approve after leaving a comment, sorry. |
transformers | 20,266 | closed | Adding doctest for `visual-question-answering` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 10:55:56 | 11-16-2022 10:55:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20266). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,265 | closed | Adding doctest for `token-classification` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 10:46:04 | 11-16-2022 10:46:04 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20265). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,264 | closed | Adding doctest for `text-generation` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 10:29:38 | 11-16-2022 10:29:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20264). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,263 | closed | Improve pipeline testing | # What does this PR do?
Improve pipeline testing | 11-16-2022 10:29:11 | 11-16-2022 10:29:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20263). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,262 | closed | Adding doctest for `text-classification` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 10:17:40 | 11-16-2022 10:17:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20262). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20262). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,261 | closed | Adding doctest for `text2text-generation` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 10:10:39 | 11-16-2022 10:10:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20261). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,260 | closed | Adding a doctest for `table-question-answering` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:52:40 | 11-16-2022 09:52:40 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20260). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,259 | closed | Adding doctest for `question-answering` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:40:27 | 11-16-2022 09:40:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20259). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20259). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,258 | closed | Adding doctest for `object-detection` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:34:49 | 11-16-2022 09:34:49 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> I still think the `candidate_labels` argument/naming is not very aligned with zero-shot object detection but the PR looks good to me otherwise.
Here it's just an alias.
I'm really not sure I understand how it could be a bad name.
The content ends up within the `label` section of the output, so saying they are `labels` doesn't seem all that shocking to me.
The fact that they are `candidate` is because there is no guarantee that you will be able to detect them.
Overall I find `candidate_labels` not that bad.
Why I think `text_queries` is not the best choice:
- It is not aligned with other `zero-shot` pipelines
- `text_queries` implies `text`, but you're not looking for text within an image; readers might understand this:
```pipe(image, text_query="Restaurant")``` Are we looking for places where the text "Restaurant" is written?
- `queries` implies asking for information, so I'm somehow expecting to always receive something; I feel it conveys the idea of "potentiality" less well than `candidate`.
The main counterargument in the original PR was that `candidate_labels` was using `hypothesis_template`. I feel like it's not really a strong stance, since `hypothesis_template` isn't really meant to be used much (it's kind of a power-user tool). Even then, I don't think the absence of a `hypothesis_template` should prevent us from using the same name: if the name is readable and serves the same purpose, then it's a good name; the rest is implementation detail imo.
Happy to hear your thoughts on why you think it fits better.<|||||>
> Here it's just an alias. I'm really not sure I understand how it could be a bad name.
>
> Happy to hear your thoughts on why you think it fits better.
@Narsil my main concern is that zero-shot-classification pipeline selects one of the `candidate_labels` to assign a final class / label to the image, whereas zero-shot-object-detection queries the image for each query and might find instances of all query objects. It'd be less confusing to users if we add a better docstring example -> using an image where both candidate_labels=["head", "bird"] are detected instead of just "bird".
<|||||>> @Narsil my main concern is that zero-shot-classification pipeline selects one of the candidate_labels to assign a final class / label to the image, whereas zero-shot-object-detection queries the image for each query and might find instances of all query objects. It'd be less confusing to users if we add a better docstring example -> using an image where both candidate_labels=["head", "bird"] are detected instead of just "bird".
Oh, but no, `zero-shot-classification` actually does multi-label classification. The default is to normalize scores across labels, but it's not mandatory; the models can handle multiple labels just fine (results tend to be more noisy, though).
https://huggingface.co/facebook/bart-large-mnli?candidateLabels=space+%26+cosmos%2C+scientific+discovery%2C+microbiology%2C+robots%2C+archeology&multiClass=true&text=A+new+model+offers+an+explanation+for+how+the+Galilean+satellites+formed+around+the+solar+system%E2%80%99s+largest+world.+Konstantin+Batygin+did+not+set+out+to+solve+one+of+the+solar+system%E2%80%99s+most+puzzling+mysteries+when+he+went+for+a+run+up+a+hill+in+Nice%2C+France.+Dr.+Batygin%2C+a+Caltech+researcher%2C+best+known+for+his+contributions+to+the+search+for+the+solar+system%E2%80%99s+missing+%E2%80%9CPlanet+Nine%2C%E2%80%9D+spotted+a+beer+bottle.+At+a+steep%2C+20+degree+grade%2C+he+wondered+why+it+wasn%E2%80%99t+rolling+down+the+hill.+He+realized+there+was+a+breeze+at+his+back+holding+the+bottle+in+place.+Then+he+had+a+thought+that+would+only+pop+into+the+mind+of+a+theoretical+astrophysicist%3A+%E2%80%9COh%21+This+is+how+Europa+formed.%E2%80%9D+Europa+is+one+of+Jupiter%E2%80%99s+four+large+Galilean+moons.+And+in+a+paper+published+Monday+in+the+Astrophysical+Journal%2C+Dr.+Batygin+and+a+co-author%2C+Alessandro+Morbidelli%2C+a+planetary+scientist+at+the+C%C3%B4te+d%E2%80%99Azur+Observatory+in+France%2C+present+a+theory+explaining+how+some+moons+form+around+gas+giants+like+Jupiter+and+Saturn%2C+suggesting+that+millimeter-sized+grains+of+hail+produced+during+the+solar+system%E2%80%99s+formation+became+trapped+around+these+massive+worlds%2C+taking+shape+one+at+a+time+into+the+potentially+habitable+moons+we+know+today. |
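For readers skimming this thread, a minimal sketch of the multi-label behaviour being described, assuming the standard `pipeline` API and its `multi_label` flag (the example text and labels are illustrative, not taken from the PR):
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
result = classifier(
    "A new model offers an explanation for how the Galilean satellites formed.",
    candidate_labels=["space & cosmos", "scientific discovery", "microbiology"],
    multi_label=True,  # score each label independently instead of normalizing across labels
)
# With multi_label=True several labels can score highly at the same time.
print(list(zip(result["labels"], result["scores"])))
```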
transformers | 20,257 | closed | Adding doctest for `image-to-text` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:29:10 | 11-16-2022 09:29:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20257). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,256 | closed | Adding doctest for `image-segmentation` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:16:14 | 11-16-2022 09:16:14 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20256). All of your documentation changes will be reflected on that endpoint.<|||||>Would it be possible here to add additional examples for SegFormer (semantic segmentation) and MaskFormer (panoptic segmentation)?<|||||>> Would it be possible here to add additional examples for SegFormer (semantic segmentation) and MaskFormer (panoptic segmentation)?
Would it make a difference in what the example looks like? I don't think so... no? |
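For context on what such a doctest exercises, a minimal sketch of the `image-segmentation` pipeline call under discussion (the checkpoint and image are illustrative choices, not necessarily the ones used in the PR):
```python
from transformers import pipeline

# Any supported segmentation checkpoint works here; DETR-panoptic is purely an example.
segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")
outputs = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")

# Each prediction carries a label, an optional score and a binary PIL mask.
for prediction in outputs:
    print(prediction["label"], prediction["score"], prediction["mask"].size)
```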
transformers | 20,255 | closed | Bug in MobileViTForSemanticSegmentation output shape | ### System Info
Python 3.9.6
transformers==4.24.0
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-small")
model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small")
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
predicted_mask = logits.argmax(1).squeeze(0)
```
### Expected behavior
When running inference on the pretrained MobileViTForSemanticSegmentation as shown [here](https://huggingface.co/apple/deeplabv3-mobilevit-small), I get a very bad segmentation mask in the Hugging Face online inference. When running locally, I see that the output mask has a shape of (1, 21, 32, 32).
I don't think outputting a 32x32 mask is intended or am I wrong? | 11-16-2022 09:10:03 | 11-16-2022 09:10:03 | Hi,
MobileViT's semantic segmentation is indeed very low-resolution, as seen in this Space: https://huggingface.co/spaces/Matthijs/mobilevit-deeplab-demo. cc @hollance
If you want to look into models that output more fine-grained segmentation results, I'd recommend checking out [SegFormer](https://huggingface.co/docs/transformers/model_doc/segformer) and [MaskFormer](https://huggingface.co/docs/transformers/model_doc/maskformer).<|||||>The original model will do a bilinear upsampling to 513x513 which makes the mask larger but also more blurry. In the Transformers library we don't do this upsampling in the model as we don't know ahead of time how you want to use the results, and we want to avoid doing multiple upsampling operations. That's why the Transformers model outputs the low resolution version.<|||||>Closing this issue as it has been resolved, feel free to reopen. |
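As a follow-up to the reproduction snippet above, a sketch of the kind of upsampling the original model applies outside of Transformers. It is illustrative only: it reuses `logits` and `inputs` from that snippet and targets the preprocessed input resolution rather than a fixed 513x513.
```python
import torch

# Upsample the coarse (1, 21, 32, 32) logits back to the preprocessed input resolution
# before taking the argmax, mirroring the bilinear upsampling done by the original model.
upsampled_logits = torch.nn.functional.interpolate(
    logits,
    size=inputs["pixel_values"].shape[-2:],
    mode="bilinear",
    align_corners=False,
)
predicted_mask = upsampled_logits.argmax(1).squeeze(0)
```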
transformers | 20,254 | closed | Adding doctest example for `image-classification` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 09:02:58 | 11-16-2022 09:02:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20254). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20254). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,253 | closed | Rephrasing the link. | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/20226#discussion_r1023296368
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-16-2022 08:55:46 | 11-16-2022 08:55:46 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20253). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20253). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,252 | open | Running the run_mlm_flax on TPU v4 pods | ### System Info
transformers 4.24.0
### Who can help?
@patil-suraj
I am having problems scaling the run_mlm_flax scripts so that they run on TPU VM v4 Pods (ie the v4-16, v4-32 etc). When running "out of the box", the performance is exactly the same as when running on a v4-8. To me this indicates that I am feeding a lot of empty data. The max `per_device_train_batch_size` for 512 sequences in RoBERTa is 62 in both cases, but since the output is identical, it is obviously not scaling.
From trying to understand the code, it seems logical to multiply the batch size here by `jax.process_count()` ([src example](https://huggingface.co/pere/roberta-base-exp-32B/blob/main/run_mlm_flax_stream.py#L452)). However, this does not seem to be the right way to approach it.
Any ideas about how to approach this? Is the script tested on v4s?
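For reference, a quick illustrative sketch of the JAX sizing calls involved here and how the global batch size is typically derived in the Flax examples (variable names are placeholders; the per-device value is the one quoted above):
```python
import jax

# On a multi-host pod these three values differ: device_count() is global across hosts,
# local_device_count() is per host, and process_count() is the number of hosts.
total_devices = jax.device_count()
devices_per_host = jax.local_device_count()
num_hosts = jax.process_count()

per_device_train_batch_size = 62  # the per-device value reported above
# The Flax example scripts derive the global batch from the total device count,
# so no extra multiplication by process_count() should be needed.
global_batch_size = per_device_train_batch_size * total_devices
print(total_devices, devices_per_host, num_hosts, global_batch_size)
```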
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See explanation above.
### Expected behavior
Expect the batch size to scale automatically. | 11-16-2022 08:34:02 | 11-16-2022 08:34:02 | Also cc @sanchit-gandhi <|||||>Hey @peregilk! Cool to see that you're using the Flax training scripts! Nice that you have TPU v4 pods as well!
The scripts are only tested on single TPU devices (i.e. TPU v2-8, v3-8 and v4-8), however they can be made to work in a multi-host set-up.
How are you launching the script on a TPU v4-16/32? Are you SSH'd into worker 0? You'll need to launch the same command on all 2/4 TPU workers for a v4-16/32 respectively.<|||||>Hi @sanchit-gandhi. I am running a slightly altered version of the scripts, based on the run_mlm_stream.py. I am both installing the software and starting the training simultaneously on all the TPU VMs. I am using a script Ive made ([ttconnect](https://github.com/peregilk/ttconnect)) for experiments like this.
The script runs also without any issues. Both on individual TPUs and on any sized pods. However, the result from training on a TPU v4-8 and on a TPU Pod v4-32 is **exactly** the same. Meaning the loss is the same, the training time is the same, etc. I really want the batches to scale across the pods. I am doing additional training of XLM-RoBERTa here, and it is trained with batch sizes around 3k. Then you need multiple TPUs. I want to increase batch size, not speed. My theory is that currently the batches do not span across the TPUs.
I made an attempt at simply multiplying the batch size in the script by jax.process_count(). That did not work. <|||||>Hey @peregilk,
Thanks for sharing those details. Your set-up looks good - the script you've made with `ttconnect` is super nice! The important thing is to run the same command across devices, which you are doing with that set-up.
The behaviour you have described seems to suggest that you're replicating exactly the same training across all four of your TPU devices. The batch size **should** scale with number of TPU devices to give you appropriate data parallelism:
https://github.com/huggingface/transformers/blob/4bb07647504a277398856e828fa48ddbec97678e/examples/flax/language-modeling/run_mlm_flax.py#L654
Could you verify that the number of devices is indeed 32?
```python
import jax
print(jax.device_count())
```<|||||>This is returning 16 on the v4-32. This is correct according to the [user guide](https://cloud.google.com/tpu/docs/v4-users-guide) since the v4 have 4 double chips. Could that be the cause of any problems?
Multiplying by jax.device_count() as I suggested is then definitively wrong.
FYI: I did run this code both with v3-8 and with v4-8. I then did double my **per_device_batch_size** before getting OOM errors.<|||||>Okay, this could well be part of the problem! Could you try printing out all the different calls from this sub-section of the guides on `pmap` (except `pmap`) https://jax.readthedocs.io/en/latest/jax.html#parallelization-pmap:
```python
import jax
print(jax.devices())
print(jax.local_devices())
...
print(jax.process_count())
```
Just to see what the right one is!<|||||>The TPU v4-32 returns the following
```
jax.devices():
[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0), TpuDevice(id=4, process_index=1, coords=(0,0,1), core_on_chip=0), TpuDe
vice(id=5, process_index=1, coords=(1,0,1), core_on_chip=0), TpuDevice(id=6, process_index=1, coords=(0,1,1), core_on_chip=0), TpuDevice(id=7, process_index=1, coords=(1,1,1), core_on_chip=0), TpuDevice(id=8, process_index=2, coords=(0,0,2), core_on_chip=0), TpuDevice(id=9, process_index=2, coords=(1,0,2), core_on_chip=0), TpuDevice(i
d=10, process_index=2, coords=(0,1,2), core_on_chip=0), TpuDevice(id=11, process_index=2, coords=(1,1,2), core_on_chip=0), TpuDevice(id=12, process_index=3, coords=(0,0,3), core_on_chip=0), TpuDevice(id=13, process_index=3, coords=(1,0,3), core_on_chip=0), TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0), TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)]
jax.local_devices():
[TpuDevice(id=12, process_index=3, coords=(0,0,3), core_on_chip=0), TpuDevice(id=13, process_index=3, coords=(1,0,3), core_on_chip=0), TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0), TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)]
jax.process_index():
worker-0: 1
worker-1: 2
worker-2: 3
worker-3: 4
jax.device_count():
16
jax.local_device_count():
4
jax.process_count():
4
```
The TPU v4-8 returns the following:
```
jax.devices():
[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0)]
jax.local_devices():
[TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0), TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0), TpuDevice(id=2, process_index=0, coords=(0,1,0), core_on_chip=0), TpuDevice(id=3, process_index=0, coords=(1,1,0), core_on_chip=0)]
jax.process_index():
0
jax.device_count():
4
jax.local_device_count():
4
jax.process_count():
1
```
More info:
```
jax.print_environment_info()
jax: 0.3.23
jaxlib: 0.3.22
numpy: 1.22.4
python: 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jax.devices (16 total, 4 local): [TpuDevice(id=0, process_index=0, coords=(0,0,0), core_on_chip=0) TpuDevice(id=1, process_index=0, coords=(1,0,0), core_on_chip=0) ... TpuDevice(id=14, process_index=3, coords=(0,1,3), core_on_chip=0) TpuDevice(id=15, process_index=3, coords=(1,1,3), core_on_chip=0)]
process_count: 4
```
<|||||>@sanchit-gandhi I found something very interesting that might be the source of most of my confusion here. When inserting a breakpoint into my code here: [breakpoint](https://huggingface.co/pere/roberta-debug-32/blob/main/run_mlm_flax_stream.py#L454)
I notice that the value of jax.device_count() actually is 4(!!), and jax.process_count() returns 1. Starting Python from the command line, importing jax, and then printing the same "jax.device_count()", the value is 16.
I do not have time to dig more into this right now. Just thought that I should mention this in case you decide to look more into this.
<|||||>@sanchit-gandhi I think I have been able to isolate the problem. This can be run directly from the command line on a v4-32:
```python
>>> import jax
>>> jax.device_count()
16
```
However, importing `TrainingArguments` seems to change the number of visible devices:
```python
>>> import jax
>>> from transformers import TrainingArguments
>>> jax.device_count()
4
```
I can not see why this should happen. I also see the following error that might give a hint about what is going on:
```python
>>> import jax
>>> jax.device_count()
16
>>> from transformers import TrainingArguments
[percpu.cc : 557] RAW: rseq syscall failed with errno 22
>>> from transformers import TrainingArguments
>>> jax.device_count()
16
```
<|||||>Great job at tracing it to a JAX-Transformers interaction! That's super weird - does this happen with just `TrainingArguments`, or other Transformers modules too (i.e. `AutoConfig`)? Does swapping the order of your imports change the behaviour?
```python
>>> from transformers import TrainingArguments
>>> import jax
>>> jax.device_count()
```
(we need to get a TPU v4 to test these issues!)<|||||>Seems to be a bit of a Schrödinger's cat problem. Whether you look at it determines if it is dead... ;) "Looking" at jax.device_count() (which probably activates the devices) seems to let you import TrainingArguments without breaking the pods.
Switching transformers and jax imports does not help. It still reports 4.
I think I have tried all the other transformer modules, and I have not been able to reproduce this with any of them.
<|||||>I'm not able to reproduce this. Running on a v4-16:
```python
In [1]: import jax
In [2]: from transformers import TrainingArguments
In [3]: jax.device_count()
Out[3]: 8
```
(v4-16 = 8 chips = 8 jax devices)
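As a quick sanity check (regardless of slice size), the numbers printed above should normally satisfy this relation:

```python
import jax

# Every host contributes the same number of local devices, so:
assert jax.device_count() == jax.process_count() * jax.local_device_count()
```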
@peregilk can you share your jax, jaxlib, libtpu-nightly, and transformers versions? Also make sure you're creating the TPUv4 with `--version=tpu-vm-v4-base`
<|||||>Thanks @skye.
For reference, in the reported error I was using `--runtime-version=v2-alpha-tpuv4-pod` with the following libraries.
```
jax: 0.3.23
jaxlib: 0.3.22
libtpu-nightly: 0.1.dev20221109
transformers: 4.24.0
```
Not reported above: when debugging I actually also tried using `--runtime-version=tpu-vm-v4-base` but still got:
```python
In [1]: import jax
In [2]: jax.device_count()
Out[2]: 4
```
I might have made a mistake when creating this pod. I will try again from scratch using `--runtime-version=tpu-vm-v4-base`.
Thanks.
<|||||>Thank you @skye! ๐<|||||>Ah yeah, `v2-alpha-tpuv4-pod` confusingly was only for running TF on a pod slice, and would prevent jax from running across the slice. So that explains it. You should always use `tpu-vm-v4-base` with jax now (or `tpu-vm-base` for v2 and v3).
You can always check the Cloud TPU docs for the latest gcloud commands (I like https://cloud.google.com/tpu/docs/run-calculation-jax and https://cloud.google.com/tpu/docs/jax-pods). I understand it's hard to know when things change; hopefully they won't change very frequently moving forward :)<|||||>Thanks a lot @skye! I can now see the devices after loading Transformers. I have also verified that is calculates the batch size correctly:
`train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count()`
With `per_device_train_batch_size=62` on a v4-8, this means `batch_size=248`. This runs on the single TPU.
On a v4-32 this becomes `batch_size=992`. Here I am still getting OOM-errors. I also reduced the batch size but I still get OOM errors.
Are there any other changes that need to be done here?
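For context, a simplified sketch of the pattern these scripts typically follow before `pmap` (illustrative numbers, not the exact code):

```python
import jax
import numpy as np
from flax.training.common_utils import shard

per_device_train_batch_size = 62
train_batch_size = per_device_train_batch_size * jax.device_count()  # 992 on a v4-32

# If every host materialises the full global batch in host memory...
batch = {"input_ids": np.zeros((train_batch_size, 512), dtype=np.int32)}

# ...shard() only reshapes by the local device count (4 per host on v4),
# so each of the 4 local devices gets 248 examples instead of the intended 62.
sharded = shard(batch)  # shape (4, 248, 512) on every host
```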
<|||||>I'm not very familiar with using `transformers`, but you may need to use `jax.local_device_count()` instead of `jax.device_count()` somewhere? See https://jax.readthedocs.io/en/latest/multi_process.html. Let me know if you still have questions, this can be tricky!<|||||>Thanks a lot @skye and @sanchit-gandhi for assisting in this. Really useful comments. It seems like splitting between the nodes simply isnt implemented in the code I am using. @agemagician actually implemented this in [pull #16527](https://github.com/huggingface/transformers/pull/16527) but it is only added to [run_mlm_flax_t5.py](https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_t5_mlm_flax.py). It is not implemented for the other [run_mlm-scripts](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling) and not in [run_mlm_flax_streaming.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py) that is the one I am using.
I can make a pull request to the other scripts, basically doing [this change](https://github.com/huggingface/transformers/pull/16527/commits/32d450552dd51729de1af336ab9705d9ca4df9e5). However, there is one remaining issue that needs to be resolved first.
For me (at least when I am using the streaming script), this turns out being extremely slow on the pods. Here is a speed comparison. All running `seq_length=512` and `per_device_train_batch_size=56`.
| device | batch_size | seconds per iteration |
|--------|------------|-----------------------|
| v4-8 | 224 | 1 |
| v4-64 | 1792 | 32 |
| v4-128 | 3584 | 220 |
Currently this is way too slow to do real training. I have not been able to test this on the non-streaming scripts, and have not made any attempts at understanding where the slowdown is. Maybe one of you has theories about what could be wrong here? It is also worth noting that starting up training (initialising from a pretrained checkpoint) typically takes 4-5 hours (the same time for both single TPUs and pods). This is however not a showstopper for doing pretraining.
<|||||>@sanchit-gandhi I have not been able to fix this yet, but I think that I at least have been able to pin down the bottleneck here.
[This iteration](https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py#L292-L294) is **extremely slow**. The entire iteration takes a couple of minutes per training step. Not sure why it is so slow though, and I do not see why "id" and "text" are excluded here. The grouping is [done differently](https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/examples/flax/language-modeling/run_mlm_flax.py#L580-L593) in the non-streaming dataset and these scripts seem to run a lot faster.
This actually also turns out to be the reason for the long startup time. The entire evaluation set is pre-tokenized and grouped, and then iterated over. With 50k steps in the evaluation set, this takes several hours. When reducing the eval set to just a few samples, the startup is almost instant.
<|||||>@sanchit-gandhi: I now have a working version that runs decently fast on the pods! I am down from 220 sec/it to around 10s/it on a v4-128.
I made the following change to the streaming code:
```python
# samples = {
# k: samples[k] + tokenized_samples[k] for k in ["input_ids", "attention_mask", "special_tokens_mask"]
# }
samples["input_ids"] += tokenized_samples["input_ids"]
samples["attention_mask"] += tokenized_samples["attention_mask"]
samples["special_tokens_mask"] += tokenized_samples["special_tokens_mask"]
```
For some reason this is a lot faster, and fast enough to be "useful". I still do not think this is optimal though. Tokenising and grouping is still slowing down the training considerably when you are using a streaming dataset.
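For reference, a rough self-contained micro-benchmark of the two approaches (sizes are made up) shows the difference between rebuilding the dict and extending the lists in place:

```python
import timeit

KEYS = ("input_ids", "attention_mask", "special_tokens_mask")

def make_batch(n=100, seq_len=512):
    return {k: [[0] * seq_len for _ in range(n)] for k in KEYS}

def rebuild(steps=200):
    samples, new = {k: [] for k in KEYS}, make_batch()
    for _ in range(steps):
        # Old approach: build brand new, ever longer lists on every step.
        samples = {k: samples[k] + new[k] for k in KEYS}

def extend_in_place(steps=200):
    samples, new = {k: [] for k in KEYS}, make_batch()
    for _ in range(steps):
        # New approach: grow the existing lists in place.
        for k in KEYS:
            samples[k] += new[k]

print("rebuild:", timeit.timeit(rebuild, number=1))
print("in-place:", timeit.timeit(extend_in_place, number=1))
```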
<|||||>I'm guessing using `+=` is a lot faster because Python is smart enough to extend the `samples` lists in-place, whereas the original implementation will end up completely rewriting each list. If that's right, I think using `+=` is the best you can do short of multi-threading (I'm not a Python performance expert though).<|||||>There are a few other things in the script that seem suboptimal. For instance are the tokenization not split across the VMs.
@skye: Do you have an estimate of what performance should ideally be expected here? Let's say one training step takes 1 second on a v4-8. How long should it take to run it on a v4-128? I guess there is some overhead in dividing the job across the TPUs, right? Just looking for an estimate of how much the current performance depends on the CPUs.
<|||||>Hey @peregilk! Sorry for the delayed response.
We can't use multiple processes with Datasets' map method when using a streaming dataset. This is because we read the raw dataset's tar file and iterate over the bytes incrementally, loading the dataset samples into memory one file at a time. This is why we don't pass the `num_proc` arg to `.map` when tokenising the dataset.
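As a rough illustration of the difference (dataset name and column are placeholders):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")

def tokenize(batch):
    return tokenizer(batch["text"])

# Non-streaming: the data is on disk, so .map can fan out over many processes.
ds = load_dataset("my_org/my_text_corpus", split="train")
ds = ds.map(tokenize, batched=True, num_proc=32, remove_columns=["text"])

# Streaming: samples are read lazily in the single reading process,
# so .map has no num_proc and runs on the fly during training.
ds_stream = load_dataset("my_org/my_text_corpus", split="train", streaming=True)
ds_stream = ds_stream.map(tokenize, batched=True, remove_columns=["text"])
```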
If your dataset is small, it might be worth downloading the dataset, pre-processing it and saving it to cache (all done under the hood by Datasets for a non-streaming dataset)? Otherwise this is part of the trade-off for using streaming datasets! We have no disk space constraints but have to load data on the fly.<|||||>Thanks @sanchit-gandhi. In my case, storing the dataset locally is not an option. I would then have to attach a disk to each of the pods, and for the large pods that is not an option.
I understand the samples need to be tokenized before it is possible to shard them across the TPUs, and I also understand that in reality this needs to be done on a single TPU VM. However, I still see more than 10 seconds per step here - it just seems like a lot.
Do you know if it is possible to pre-tokenize (or even pre-shard) a dataset and keep it streaming? Is it worth looking into that, or do you think it is better to look closer into what is taking the time here?
Each TPU VM is quite a capable machine (200 CPU cores). Even if it is hard to split this over multiple VMs, are there better ways of using the VM that needs to do the processing?
<|||||>> Do you have an estimate of what performance should ideally be expected here? Let's say one training step takes 1 second on a v4-8. How long should it take to run it on a v4-128?
Sorry missed this earlier. Not sure it's still useful, but for batch parallelism, you should expect near linear scaling if you keep the per-device batch size the same. I.e. if you increase the global batch size 16-fold going from v4-8 -> v4-128, the step time should remain constant. If you keep the global batch size the same (i.e. decrease the per-device batch size as you increase devices), the speedup should be roughly linear until you reach a certain minimum per-device batch size.<|||||>Thanks a lot @skye. Great to get this confirmed. Basically the script today runs 10X slower than it potentially should. Or....put another way... 90% of the time is used for preparing the dataset and 10% is used efficiently for training.
If I understand correctly, @sanchit-gandhi, there will soon be a flax implementation for Whisper with the streaming dataset. I will test this as well, and see if I get the same issues here.
I have a few ideas about how to figure out what is really going on here, and I will start looking into this more thoroughly early next year.
Hope it is OK that I am also tagging @lhoestq .
<|||||>You can use something like a torch DataLoader with num_workers > 0 with your streaming dataset. This way you load and collate the data in parallel to your forward and backward passes.<|||||>Thanks a lot @lhoestq. If I understand correctly, the way this works on streaming datasets is that the DataLoader is starting a worker for each of the dataset shards. So if you have the compute capacity, the optimal setting is `num_workers=dataset.n_shards` (With my test dataset this is 85).
I tried implementing this like:
```python
# Replace
# training_iter = iter(tokenized_datasets)
training_iter = iter(torch.utils.data.DataLoader(  # needs `import torch` at the top of the script
    tokenized_datasets.with_format("torch"), batch_size=1, shuffle=False,
    num_workers=dataset.n_shards, collate_fn=lambda x: x))
```
My reference is **1 sec/iteration on a v4-8**. According to @skye this should continue to be **1 sec/iteration on a v4-128** with my setup. As shown above, I started at **220 sec/iteration on a v4-128**. Before the suggestion from @lhoestq, I was down to **11 sec/iteration**. After adding the Torch DataLoader this is reduced to **5 sec/iteration**.
Even if things are looking way better, I still think this can be improved further. I took a look at the load on the VMs' CPUs, and the load is still very low: approx. 10% with some very short peaks. All cores are used.
I am willing to share what I have so far here. @patrickvonplaten: Are you interested in merging the support for the tpu v4-pods into [run_mlm_flax_stream.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py)? Maybe others can contribute and improve on this as well?
<|||||>Sorry for dropping the ball here @peregilk! I understand that your MLM experiments are working quite well currently?
> Are you interested in merging the support for the tpu v4-pods into [run_mlm_flax_stream.py](https://github.com/huggingface/transformers/blob/main/examples/research_projects/jax-projects/dataset-streaming/run_mlm_flax_stream.py)? Maybe others can contribute and improve on this as well?
This would be super! We could start with your working MLM streaming script? Feel free to open a PR on transformers if you're interested and tag me ๐ค happy to iterate with you here!<|||||>Yes, @sanchit-gandhi, the training is running at acceptable speed. I am currently training some larger models. When I get the results from these, and are certain that everything really works, Ill open a PR. <|||||>Keeping alive. I will do this together with the Whisper pod support.<|||||>Kinda unrelated to the issue, since I was working on diffuser model instead. But I noticed some oddity.
At the moment, it seems like the solution for TPU pod/multi-process setups is to divide the global batch into local batches corresponding to the process ( #16527 ). That would mean a single dataloader for all processes, if dataloading is kept behind a process index check. Otherwise, it means the exact same dataloader on all processes, each loading a global batch and discarding non-local data.
Wouldn't it be better for each process to have its own dataloader, streaming from a list of pre-divided datasets?
Instead of having one dataset (hf streaming dataset from the datasets library), I tested splitting my dataset into multiple ones of exact size under different index. It seemed to allow faster data loading. I have yet test it on pod environment though. The reasoning behind splitting into multiple dataset is from experience working with tfrecord, which recommends multiple smaller far instead of a massive file as hf datasets currently do with tar streaming.<|||||>With HF `datasets` library you can already [split_dataset_by_node](https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.distributed.split_dataset_by_node):
```python
from datasets.distributed import split_dataset_by_node
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)
```
this works for regular (="map-style") and iterable datasets (e.g. when streaming).
From the documentation:
> For map-style datasets:
>
> Each node is assigned a chunk of data, e.g. rank 0 is given the first chunk of the dataset. To maximize data loading throughput, chunks are made of contiguous data on disk if possible.
>
> For iterable datasets:
>
> If the dataset has a number of shards that is a factor of world_size (i.e. if dataset.n_shards % world_size == 0), then the shards are evenly assigned across the nodes, which is the most optimized. Otherwise, each node keeps 1 example out of world_size, skipping the other examples.<|||||>@Lime-Cakes
Thanks a lot @lhoestq. This turned out to be the way to make this run fast on the TPU v4 pods. The two tricks seem to be using `split_dataset_by_node` and then using `torch.utils.data.DataLoader` with a high (30+) number of workers.
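For reference, a minimal sketch of how the two pieces fit together (dataset name and worker count are illustrative):

```python
import jax
import torch
from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Each host streams only its own subset of shards...
dataset = load_dataset("my_org/my_sharded_corpus", split="train", streaming=True)
dataset = split_dataset_by_node(dataset, rank=jax.process_index(), world_size=jax.process_count())

# ...and loads it with many background workers.
loader = torch.utils.data.DataLoader(
    dataset.with_format("torch"),
    batch_size=1,
    num_workers=min(32, dataset.n_shards),
    collate_fn=lambda x: x,
)
training_iter = iter(loader)
```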
I have mainly been working on getting this to run for Whisper lately. I now have a training script here that I am willing to submit. @sanchit-gandhi, please advise, and I will open a pull request. The same approach should be possible for `run_mlm_flax_streaming.py`.
Attaching a graph showing how it scales between v4-8, v4-16 and v4-32. It also shows how defining too few workers will drastically reduce the speed. The scaling is now very close to linear as @skye commented on earlier.
<img width="1095" alt="image" src="https://user-images.githubusercontent.com/9079808/228343414-a6b1f97d-de81-42e5-bc84-40adcf8ebdfa.png">
To be able to get "enough" workers on the larger pods, the dataset also needs to have a lot of shards. In this example I used 256 shards, giving a maximum of 64 shards on each of the VMs on a v4-32. <|||||>Updating this post. The Whisper Tiny model seems to scale almost perfectly here. I am able to use the pods both for increasing batch size and for increasing speed.
However, for some very strange reason, I am unable to do this for the larger Whisper models. All of them seem to train great for the first few steps, then they simply freeze. I have spent a lot of time debugging this, and am a bit lost at the moment. It does however seem to be related to updating the model state, and not related to the dataset loading.
The training script is available here: [https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py](https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py)
However, we probably need to iron out this bug before making a pull request.
@sanchit-gandhi : I can set up a minimum example for reproducing the bug. Please let me know.
<|||||>We now finally have a working training script for Flax Whisper! It uses dataset streaming and runs really fast on TPU pods. It also runs on single TPUs and GPUs. We are making some final modifications and cleanups, and have agreed with @sanchit-gandhi to make a review before making a pull request in a few days.
If anyone following this thread has access to TPUs and wants to train Whisper, please notify me. We would really like to also implement [gradient checkpointing](https://github.com/cybertronai/gradient-checkpointing#saving-memory-using-gradient-checkpointing) to boost batch size on the large models. However, we do not have the capacity or knowledge to implement it ourselves, but we would be happy to contribute by testing it if anyone has the capacity.
|
transformers | 20,251 | closed | Consistently getting lower log probabilities for more probable sequences | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): 2.9.2 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patrickvonplaten @gante @nars
### Information
- [X] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I have been using OPT models (125m~66b) to score sequences (see #20008) from a set of multiple choice problems (8 choices). I found that, on average, the wrong sequences are ~10% (slightly below 1/8) likely to have the maximum probability (among the 8 choices), but the correct sequence is ~1% likely to have the maximum probability. GPT-3 displays the exact opposite (expected) behaviour. My current hypotheses are
1. I calculated log probabilities incorrectly; see `score` function below. Although I think we figured it out here #20008, with the help of @gante.
2. `model(input).logits` might have returned the wrong thing.
To reproduce the error (not on my specific task but in general):
1. Define
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.nn.functional import log_softmax

model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b",
device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
def score(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_tokens = [tokenizer.decode(id) for id in input_ids[0]]
input_logprobs = []
logits = model(input_ids).logits
all_tokens_logprobs = log_softmax(logits.double(), dim=2)
for k in range(input_ids.shape[1]):
input_logprobs.append(all_tokens_logprobs[:,k,input_ids[0,k]])
input_logprobs = [input_logprobs[k].detach().numpy()[0] for k in range(len(input_logprobs))]
return input_tokens, input_logprobs
def display(prompt):
input_tokens, input_logprobs = score(prompt)
out_str = ""
for i in range(len(input_logprobs)):
out_str = out_str + str(input_tokens[i]) + ": " + str(input_logprobs[i]) + " "
print(out_str)
```
2. Run `display` on pairs of prompts and compare probabilities. For example:
- Run `display("he works at the university as a professor")`
- Output: `
</s>: -9.622528113218403 he: -3.4190597141055896 works: -11.342809676330889 at: -7.323995369804066 the: -7.797857269329157 university: -8.355341028220346 as: -8.410616662546053 a: -7.980906483021017 professor: -10.740861657554035 `
- Run `display("he works at the university as a clown")`
- Output: `</s>: -9.584896765616625 he: -3.4396850232969243 works: -11.341423123411294 at: -7.338279654205086 the: -7.808466620856404 university: -8.343803691603435 as: -8.401746292912188 a: -7.94869412338276 clown: -6.500715861445045`
- Note that the `LogProb("professor" | "he works at the university as a") < LogProb("clown" | "he works at the university as a")`.
3. Additional examples:
- `display("he could not play due to injury")`
- `</s>: -9.609577798790037 he: -3.4115336573055703 could: -9.250618499206704 not: -6.846035151696857 play: -9.919994452618948 due: -9.421663548632559 to: -8.411191754931467 injury: -9.714442198832101 `
- `display("he could not play due to apple")`
- `</s>: -9.595368031203268 he: -3.402465835779653 could: -9.260463627711584 not: -6.82763851395037 play: -9.899493418563107 due: -9.375417731887582 to: -8.429231628522128 apple: -7.371283146690385
`
- `display("Q: who makes the iphone? A: Apple")`
- `</s>: -9.735390750532165 Q: -4.982232076966033 :: -11.777887545688749 who: -9.614314712501576 makes: -13.181075048715304 the: -7.544189630785205 : -4.4103913220035835 iph: -14.036189109433794 one: -12.96041481602171 ?: -12.274218276071288 A: -10.156031264618123 :: -12.576397941389803 Apple: -9.243583670812342 `
- `display("Q: who makes the iphone? A: Banana")`
- `</s>: -9.749204391127897 Q: -4.9924473604492725 :: -11.771810712752153 who: -9.594715884838685 makes: -13.212738897899317 the: -7.47622315378366 : -4.443921992316447 iph: -14.110338198583378 one: -12.932230614752198 ?: -12.295626501904344 A: -10.15586013628168 :: -12.57128175779327 Banana: -7.862753006007329 `
### Expected behavior
Should be getting higher log probabilities for more probable sequences. For example, it should be the case that `LogProb("professor" | "he works at the university as a") > LogProb("clown" | "he works at the university as a")`.
There also appear to be very few tokens with a high (close to 0) log probability. It is also weird to me that the first token (after `</s>`) seems to have a very high probability.
Basically the model displays the exact opposite of the expected behaviors. This has truly been puzzling. Any input is appreciated! | 11-16-2022 05:02:24 | 11-16-2022 05:02:24 | Hey @xiaoyangnickhu ๐
I believe your snippet has an issue with indexing when appending to `input_logprobs`. `all_tokens_logprobs` contains the log probs for the NEXT token, not for the current token. i.e. accessing `all_tokens_logprobs[batch_idx, 0, token]` has the log probs for the token at position `1`, not for the token at position `0`.
Factoring that in, you get the outcomes that you are expecting. For instance, in the first example, `professor` is the most likely token for the last slot ๐ค
_______________________________________________
Snippet with the indexing correction:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from torch.nn.functional import log_softmax
model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b",
device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False)
def score(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_tokens = [tokenizer.decode(id) for id in input_ids[0]]
input_logprobs = []
logits = model(input_ids).logits
all_tokens_logprobs = log_softmax(logits.double(), dim=2)
for k in range(1, input_ids.shape[1]):
input_logprobs.append(all_tokens_logprobs[:, k-1, input_ids[0,k]])
input_logprobs = [input_logprobs[k].detach().numpy()[0] for k in range(len(input_logprobs))]
return input_tokens, input_logprobs
def display(prompt):
input_tokens, input_logprobs = score(prompt)
out_str = ""
for i in range(len(input_logprobs)):
out_str = out_str + str(input_tokens[i+1]) + ": " + str(input_logprobs[i]) + " "
print(out_str)
```<|||||>The fix works. Thanks! |
transformers | 20,250 | closed | All Flan-T5 models configs use the incorrect activation function | ### System Info
The [configs](https://huggingface.co/google/flan-t5-xxl/blob/main/config.json) for all of the Flan-T5 models say that the activation function is 'gelu', and yet 'is_gated_act' is set to true. This is an inherent contradiction.
Doing more digging, I realized that per [Google's original Flan-T5 checkpoints ](https://github.com/google-research/t5x/blob/main/docs/models.md#t5-11-lm-adapted-checkpoints), Flan-T5 is directly instantiated from T5v1.1 LM-adapt, which all use [gated-gelu](https://huggingface.co/google/t5-xxl-lm-adapt/blob/main/config.json)
### Who can help?
@younesbelkada @arthur
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Compare the T5v1.1+LM configs to the Flan-T5 configs.
### Expected behavior
"feed_forward_proj" should be "gated-gelu" and "dense_act_fn" is redundant and should be removed entirely from the config. | 11-16-2022 03:06:04 | 11-16-2022 03:06:04 | Hi @michaelroyzen
Thanks for raising this.
You are right, one should use `gated-gelu` as it is done in t5 LM-adapt checkpoints. We have updated with @ArthurZucker the config files of flan-T5 models.
Note that forcing `is_gated_act` to `True` leads to using gated activation function too. The only difference between these 2 approaches is that using `gated-gelu` forces the model to use `gelu-new` activation function. See [this line](https://github.com/huggingface/transformers/blob/a00b7e85ea4b3e3185440f1f82a6b58e3660b01d/src/transformers/models/t5/configuration_t5.py#L132). `gelu-new` gives slightly different results than `gelu` but does not affect the overall performance of the model.
This is also not a breaking change from what I can see, as it only affects inference of the model. If someone has fine-tuned flan-T5 with `gelu`, they should not be affected by this change.
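For anyone who wants to double-check a local copy, a quick sanity check along these lines (model id just as an example) should show the gated setting:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/flan-t5-xxl")
print(config.feed_forward_proj)  # "gated-gelu" with the updated configs
print(config.is_gated_act, config.dense_act_fn)
```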
Closing this issue as the config files have been fixed. Thanks for your help!
cc @sgugger<|||||>Thank you @younesbelkada. May I ask how this only affects inference? If a flan-t5 model was fine-tuned with the old checkpoint, wouldn't the wrong activations be used?<|||||>> Note that forcing `is_gated_act` to `True` leads to using gated activation function too.
@younesbelkada From this line of code, it seems that is_gated_act will be set to false without gated-relu? https://github.com/huggingface/transformers/blob/a00b7e85ea4b3e3185440f1f82a6b58e3660b01d/src/transformers/models/t5/configuration_t5.py#L122<|||||>@LiJunnan1992
No, it gets overriden by the kwargs [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/configuration_t5.py#L135), check this snippet:
```
from transformers import T5Config
config_gated = T5Config(is_gated_act=True, hidden_act="gelu")
print(config_gated.is_gated_act)
>>> True
config_gated = T5Config(hidden_act="gelu")
print(config_gated.is_gated_act)
>>> False
config_gated = T5Config(feed_forward_proj="gated-gelu")
print(config_gated.is_gated_act)
>>> True
```<|||||>@younesbelkada I see. Thanks so much for the explanation! |
transformers | 20,249 | closed | Support X | Y syntax on HfArgumentParser | ### Feature request
[PEP-604](https://peps.python.org/pep-0604/) created the X | Y syntax in Python 3.10, which is equivalent to Union[X, Y]. The use of this syntax is not supported by HfArgumentParser.
### Motivation
With this syntax I would like to use something like:
```
@dataclass
class ModelArguments:
some_argument: str | None = field(
default=None,
metadata={"help": "some argument"},
)
```
Instead of:
```
@dataclass
class ModelArguments:
some_argument: Optional[str] = field(
default=None,
metadata={"help": "some argument"},
)
```
When trying to use the first one, it throws an error:
```
Traceback (most recent call last):
File "/home/jcanete/new-kd/kd/train.py", line 299, in <module>
main()
File "/home/jcanete/new-kd/kd/train.py", line 160, in main
parser = HfArgumentParser(
File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 73, in __init__
self._add_dataclass_arguments(dtype)
File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 178, in _add_dataclass_arguments
self._parse_dataclass_field(parser, field)
File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/site-packages/transformers/hf_argparser.py", line 149, in _parse_dataclass_field
parser.add_argument(field_name, **kwargs)
File "/home/jcanete/anaconda3/envs/venv/lib/python3.10/argparse.py", line 1427, in add_argument
raise ValueError('%r is not callable' % (type_func,))
ValueError: str | None is not callable
```
### Your contribution
Not sure if the best solution but changing [line 88 of hf_argparser.py](https://github.com/huggingface/transformers/blob/main/src/transformers/hf_argparser.py#L88) from:
`if origin_type is Union:`
to
`if origin_type is Union or type(origin_type) is UnionType:`
Does the trick on my local installation.
(it also requires adding the import `from types import UnionType`).
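A version-guarded variant of the same idea (an untested sketch, not the actual library code) could keep older Python versions working:

```python
import sys
from typing import Union

if sys.version_info >= (3, 10):
    from types import UnionType
else:
    UnionType = None

def is_union(origin_type) -> bool:
    # Covers typing.Union[X, Y] / Optional[X] as well as the PEP 604 form X | Y.
    if origin_type is Union:
        return True
    return UnionType is not None and (origin_type is UnionType or isinstance(origin_type, UnionType))
```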
| 11-16-2022 00:20:34 | 11-16-2022 00:20:34 | Looks like adding support while not breaking previous Python version will be tricky, as `from types import UnionType` only work for Python 3.10 and above. We can look at a PR if you want to try a contribution, but I don't think we will add this ourselves until Python 3.10 is more widely supported (PyTorch and TensorFlow do not support Python 3.10 for instance).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Ran into the same issue today. Any plan to support union-type annotations (`X | Y`)?
Now, Python 3.10 was released 1.5 years ago. It is widely used and has become the default Python version for `conda`. Also, if users have `from __future__ import annotations` in their scripts, some automation tools, such as `pyupgrade` / `ruff`, will automatically rewrite the type annotations (`Union[X, Y] -> X | Y`, `Optional[X] -> X | None`). |
transformers | 20,248 | closed | Sharded T5X checkpoints can't be converted to pytorch ? | ### System Info
- `transformers` version: 4.21.1
- Platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@patrickvonplaten
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. Train using t5x to get a checkpoint that's bigger than 10GB.
2. use official conversion script, e.g.,
```
python transformers/models/t5/convert_t5x_checkpoint_to_flax.py \
    --t5x_checkpoint_path checkpoint_100000/ \
    --config_name google/t5-v1_1-xxl \
    --flax_dump_folder_path checkpoint_converted
```
this yields, e.g.,
```
$ ls checkpoint_converted
config.json
flax_model-00001-of-00005.msgpack
flax_model-00002-of-00005.msgpack
flax_model-00003-of-00005.msgpack
flax_model-00004-of-00005.msgpack
flax_model-00005-of-00005.msgpack
flax_model.msgpack.index.json
```
3. you can load like this:
```
tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-xxl")
flax_model = FlaxT5ForConditionalGeneration.from_pretrained("checkpoint_converted/")
```
but you can't load like this:
```
model = T5ForConditionalGeneration.from_pretrained("checkpoint_converted/", from_flax=True)
```
because of an error:
```
OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory
```
### Expected behavior
I think it would be nice to be able to load sharded flax checkpoints using the more generic class: this would be useful, e.g., for converting big flax checkpoints to pytorch. The machinery for loading appears to be mostly implemented, but it isn't yet connected to the `T5ForConditionalGeneration.from_pretrained` method. I can work on a PR if all this checks out, but wanted to see if there was something I was missing.
Relevant parts of the code:
checking for the flax sharded file type:
https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/modeling_utils.py#L1997-L2029
https://github.com/huggingface/transformers/blob/a44985b41cfa2de48a5e1de7f1f93b7483da25d1/src/transformers/modeling_flax_utils.py#L659-L685
support for sharded pytorch --> flax (but not vice-versa):
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py
function that loads sharded flax checkpoints:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py#L424-L468
function that might need to be modified to detect a sharded checkpoint, and then call the above:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_pytorch_utils.py#L224-L239 | 11-16-2022 00:19:12 | 11-16-2022 00:19:12 | Hi @jmhessel Flax is in maintenance mode only, so it's very unlikely we will add support to this ourselves. Feel free to open a PR if you want however!<|||||>gotcha, thank you! I will take a look<|||||>Actually this is relatively important I think since the main way to pretrain T5 is via Google's T5X repo so having a good Flax->PyTorch conversion in our side I think is not super unimportant.
It's likely that more T5X checkpoints will come out and it'd be nice to be able to directly convert them to PyTorch.
Maybe gently pinging the author of the conversion script: @stefan-it (in case you have any ideas) and @younesbelkada and @ArthurZucker in case you can find some time to help @jmhessel :-) <|||||>thanks @sgugger and @patrickvonplaten ! :-)
I am not sure I will get a chance to look at this this week, but --- I will mention one quick solution suggested by @peterwestuw --- if one simply increases the shard size here:
https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py#L950-L952
then it might be possible to save all weights in a single file and use the generic `from_pretrained` once again. Not the cleanest solution b/c it would be ideal to support sharded loading, but wanted to mention.<|||||>This would require a machine with lots of RAM to accommodate the checkpoint.<|||||>update: Increasing the shard size worked for me! But yes, I needed to grab a large RAM instance to do the conversion. Not a bad stopgap. Given that this solved my issue, I might not be able to look at this in the next few weeks, but I'll leave the issue open for now in case I come back to it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,247 | closed | Saving an 8-bit T5 does not work. | ### System Info
Latest.
### Who can help?
@patrickvonplaten
I am able to load a T5 model in 8-bit-format using this command:
`model = T5ForConditionalGeneration.from_pretrained(".",load_in_8bit=True, device_map="auto")`
The model works fine. Saving the model using ".save_pretrained()" also seems to work, and the file on the disk is a lot smaller. I am able to load the saved model again without errors, but then the model no longer works.
Here are two sample models:
north/fine_North_large
north/fine_North_large_8bit
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
See description above
### Expected behavior
The model should not change when being saved. | 11-15-2022 21:29:16 | 11-15-2022 21:29:16 | Those are all arguments for inference specifically. `save_pretrained` does not work if you use `device_map="auto"` if you have weights offloaded on the CPU (not sure if it's the case or not here) and it's not been tested with `load_in_8bit` either.
We will probably fix this later down the road, but for now you should not save on disk the model this way (you can find the model in your cache if you need its weights anyway)<|||||>Thanks for the feedback, @sgugger.
So, just to be sure that I understand this: There is not really any way of easily distributing only the 8-bit model. The correct way of doing it will be to distribute the 32-bit version, and then ask people to use **load_in_8bit=True, device_map="auto"** ?
What about loading the 8-bit version in the HuggingFace widgets? Is that even possible?<|||||>There is no way to save and reload an 8-bit model anyway (correct me if I'm wrong @younesbelkada ) so you need to use `load_in_8bit=True, device_map="auto"` indeed.<|||||>Yes this is correct, you can also save the model in fp16 to save memory and load it back in int8, but currently saving and loading the model in 8-bit is not supported |
transformers | 20,246 | closed | Add Galactica model | ### Model description
Galactica is a large language model for science.
Can summarize academic literature, solve math problems, generate Wiki articles, write scientific code, annotate molecules and proteins.
Web: galactica.org
Code: https://github.com/paperswithcode/galai
PS: It seems to use the OPT architecture so maybe we can reuse the code used for adding that model. I would like to add this model :)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | 11-15-2022 21:28:16 | 11-15-2022 21:28:16 | If the architecture is the same, it's just a matter of converting the checkpoints to the HF format and uploading them, no need to add a new model in the library :-)<|||||>Hi, Sylvain!
Yes, it is the same (I have even run the forward pass with their wrapper and with the HF `OPTForCausalLM`, and the output is the same). I think the ckpts are already converted (they have a config file that makes them work with HF).
2 questions:
1) The tokenizer is read from a single file `tokenizer.json` with the `Tokenizer.from_file()` method. How do I split it into separated files? (config, merges, vocab, etc)
2) Do I upload the models with my personal account? Like this: https://huggingface.co/mrm8488/galactica-125m ?
Thanks!<|||||>Not working correctly right now.<|||||>@sgugger Can you guide us better?<|||||>@mrm8488 would probably be best to include under the `facebook` org, some folks at HF like @patrickvonplaten should be able to upload them there<|||||>@patrickvonplaten We need your help on this one.<|||||>Hey guys, stoked for the integration - just one model specific thing, if user enters prompt like:
"[START_AMINO]MVATE[END_AMINO]" -> need to tokenize so the bit inside the special tokens is character-based tokenized -> i.e M, V, A, T, E. we have the logic in https://github.com/paperswithcode/galai/blob/main/galai/model.py, see "escape_custom_split_sequence".<|||||>Any update?<|||||>> Any update?
It is WIP! Thanks! :)<|||||>Model is fully added see: https://huggingface.co/models?other=galactica - thanks a lot @mrm8488 for you work here! Closing this :-) |
transformers | 20,245 | closed | Add Spanish translation of serialization.mdx | # What does this PR do?
Add the Spanish translation for `serialization.mdx` as part of the #15947 issue.
Changes include the Spanish version of the original document and the updated `_toctree.yml` file.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://github.com/huggingface/transformers/issues/15947#issuecomment-1312741421)**.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. | 11-15-2022 21:26:45 | 11-15-2022 21:26:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@omarespejel or @osanseviero, can you help me review this PR? Thanks!<|||||>@osanseviero, I addressed your comments in my last commit. Thanks for your review. I'll consider your feedback for future translations.<|||||>@sgugger, review done :) Please merge when possible. Thanks! |
transformers | 20,244 | closed | Data collator for token classification pads labels column when receives pytorch tensors | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a bug where DataCollatorForTokenClassification fails when trying to collate examples whose "labels" column contains PyTorch tensors.
I faced this issue yesterday and checked that it can be solved easily, so I did not open any Issue on github.
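For reference, a minimal reproduction sketch of the failure being addressed (tokenizer checkpoint is just an example):

```python
import torch
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
collator = DataCollatorForTokenClassification(tokenizer)

features = [
    {"input_ids": torch.tensor([101, 1188, 102]), "labels": torch.tensor([0, 1, 0])},
    {"input_ids": torch.tensor([101, 1188, 1143, 102]), "labels": torch.tensor([0, 1, 1, 0])},
]
batch = collator(features)  # fails before this fix when "labels" are tensors
```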
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-15-2022 20:38:01 | 11-15-2022 20:38:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20244). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20244). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,243 | closed | DataCollatorForTokenClassification pads label column when working with torch tensors | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 11-15-2022 19:58:31 | 11-15-2022 19:58:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20243). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,242 | closed | Update reqs to include min gather_for_metrics Accelerate version | # What does this PR do?
Update all the PyTorch examples using `accelerate` and `gather_for_metrics` to include a minimum accelerate version of 0.12.0 since this introduced `gather_for_metrics`
Related to https://github.com/huggingface/accelerate/issues/854
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 11-15-2022 17:40:32 | 11-15-2022 17:40:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,241 | closed | Adding doctest for `fill-mask` pipeline. | # What does this PR do?
Follow up of https://github.com/huggingface/transformers/pull/20226
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 17:02:39 | 11-15-2022 17:02:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20241). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,240 | closed | Adding doctest for `feature-extraction`. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 16:51:19 | 11-15-2022 16:51:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20240). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20240). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,239 | closed | Adding doctest for document-question-answering | # What does this PR do?
Follow up https://github.com/huggingface/transformers/pull/20226
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 16:41:52 | 11-15-2022 16:41:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20239). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,238 | closed | DonutProcessor token2json too slow | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.2.5
- Python version: 3.8.15
- Huggingface_hub version: 0.11.0.dev0
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@NielsRogge
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```python
from transformers import DonutProcessor

processor = DonutProcessor.from_pretrained("nielsr/donut-base")
sequence = (
    "<s_name>John Doe</s_name><s_age>99</s_age><s_city>Atlanta</s_city>"
    "<s_state>GA</s_state><s_zip>30301</s_zip><s_phone>123-4567</s_phone>"
)
# If you're not using a jupyter notebook, delete the %timeit
%timeit processor.token2json(sequence)
```
The `token2json` method does not scale well with the number of xml tags. It takes ~70ms for each tag in the sequence str. If there are 10 tags, it will take ~700ms, 20 tags ~1400ms, etc. The bottleneck in `token2json` is `self.tokenizer.get_added_vocab()`. It's called once for every tag. `get_added_vocab` takes ~70ms to run each time.
### Expected behavior
Since `get_added_vocab` isn't changing during the `token2json` call, `get_added_vocab` should only be called once and the results be reused. Here's one potential solution:
```
def token2json(self, tokens, is_inner_value=False, added_vocab=None):
    """
    Convert a (generated) token sequence into an ordered JSON format.
    """
    if added_vocab is None:
        added_vocab = self.tokenizer.get_added_vocab()

    output = dict()

    while tokens:
        start_token = re.search(r"<s_(.*?)>", tokens, re.IGNORECASE)
        if start_token is None:
            break
        key = start_token.group(1)
        end_token = re.search(rf"</s_{key}>", tokens, re.IGNORECASE)
        start_token = start_token.group()
        if end_token is None:
            tokens = tokens.replace(start_token, "")
        else:
            end_token = end_token.group()
            start_token_escaped = re.escape(start_token)
            end_token_escaped = re.escape(end_token)
            content = re.search(f"{start_token_escaped}(.*?){end_token_escaped}", tokens, re.IGNORECASE)
            if content is not None:
                content = content.group(1).strip()
                if r"<s_" in content and r"</s_" in content:  # non-leaf node
                    value = self.token2json(content, is_inner_value=True, added_vocab=added_vocab)
                    if value:
                        if len(value) == 1:
                            value = value[0]
                        output[key] = value
                else:  # leaf nodes
                    output[key] = []
                    for leaf in content.split(r"<sep/>"):
                        leaf = leaf.strip()
                        if leaf in added_vocab and leaf[0] == "<" and leaf[-2:] == "/>":
                            leaf = leaf[1:-2]  # for categorical special tokens
                        output[key].append(leaf)
                    if len(output[key]) == 1:
                        output[key] = output[key][0]

            tokens = tokens[tokens.find(end_token) + len(end_token):].strip()
            if tokens[:6] == r"<sep/>":  # non-leaf nodes
                return [output] + self.token2json(tokens[6:], is_inner_value=True, added_vocab=added_vocab)

    if len(output):
        return [output] if is_inner_value else output
    else:
        return [] if is_inner_value else {"text_sequence": tokens}
``` | 11-15-2022 16:37:08 | 11-15-2022 16:37:08 | Hi,
Thanks for looking into this and finding the bottleneck. Would you be available to contribute this to the community by opening a PR? Thanks!<|||||>> Hi,
>
> Thanks for looking into this and finding the bottleneck. Would you be available to contribute this to the community by opening a PR? Thanks!
Yes. I just opened the [PR](https://github.com/huggingface/transformers/pull/20283). |
transformers | 20,237 | closed | Adding an example for `depth-estimation` pipeline. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 16:14:16 | 11-15-2022 16:14:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20237). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20237). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,236 | closed | Updating the doctest for conversational. | # What does this PR do?
Follow up https://github.com/huggingface/transformers/pull/20226
- Make it tested against
- Add explicit output in the test.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 16:00:01 | 11-15-2022 16:00:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20236). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20236). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,235 | closed | Adding `audio-classification` example in the doc. | # What does this PR do?
Follow up of https://github.com/huggingface/transformers/pull/20226
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 14:28:56 | 11-15-2022 14:28:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20235). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,234 | closed | Fix `run_clip.py` | # What does this PR do?
The type annotation for `Transform.forward` (in `run_clip.py`) never worked with any of the torch/pillow versions I tried.
This forward method is compiled with `torch.jit`.
With the current type annotation `x: Image`, we get error
```bash
RuntimeError:
Unknown type name 'Image':
```
With `x: Image.Image`, error is
```bash
NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
File "/home/huggingface/miniconda3/envs/py38/lib/python3.8/site-packages/PIL/Image.py", line 550
def __exit__(self, *args):
~~~~~ <--- HERE
if hasattr(self, "fp") and getattr(self, "_exclusive_fp", False):
if getattr(self, "_fp", False):
'__torch__.PIL.Image.Image' is being compiled since it was called from 'Transform.forward'
``` | 11-15-2022 14:17:57 | 11-15-2022 14:17:57 | _The documentation is not available anymore as the PR was closed or merged._ |
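For reference, the direction of the fix is roughly the following (a sketch, not necessarily the exact change merged in this PR): drop the PIL annotation and let the scripted transform take a `torch.Tensor`, which TorchScript can handle.
```python
import torch
from torch import nn
from torchvision.transforms import CenterCrop, ConvertImageDtype, Normalize, Resize


class Transform(nn.Module):
    def __init__(self, image_size, mean, std):
        super().__init__()
        self.transforms = nn.Sequential(
            Resize([image_size]),
            CenterCrop(image_size),
            ConvertImageDtype(torch.float),
            Normalize(mean, std),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Annotating with torch.Tensor (instead of PIL's Image type) keeps torch.jit.script happy.
        with torch.no_grad():
            x = self.transforms(x)
        return x
```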
transformers | 20,233 | closed | Fix docstring of CLIPTokenizer(Fast) | # What does this PR do?
The docstrings of `CLIPTokenizer` and `CLIPTokenizerFast` had the wrong default value for `bos_token` (`<|endoftext|>` instead of `<|startoftext|>`).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
- Documentation: @sgugger | 11-15-2022 13:48:11 | 11-15-2022 13:48:11 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20233). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,232 | closed | Slightly alter Keras dummy loss | The existing `dummy_loss` returns a scalar, but this is incorrect - Keras expects loss functions to return a vector of per-sample losses. This only causes issues when the user passes `sample_weight` to `fit()` - I'll probably adjust our tests to include that to make sure we don't regress on this in future. | 11-15-2022 13:07:39 | 11-15-2022 13:07:39 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante Can you take one more look at the expanded tests?
Also, the failing test looks like flakiness with a too-low threshold to me, unrelated to this PR |
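For reference, a dummy loss in that per-sample spirit could look like this (a sketch, not necessarily the exact code in this PR):
```python
import tensorflow as tf


def dummy_loss(y_true, y_pred):
    # Return one loss value per sample rather than a scalar, so Keras can apply
    # `sample_weight` correctly: reduce over every axis except the batch axis.
    if y_pred.shape.rank <= 1:
        return y_pred
    return tf.reduce_mean(y_pred, axis=list(range(1, y_pred.shape.rank)))
```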
transformers | 20,231 | closed | TF: add test for `PushToHubCallback` | # What does this PR do?
Adds a test to TF's `PushToHubCallback` | 11-15-2022 11:26:03 | 11-15-2022 11:26:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20231). All of your documentation changes will be reflected on that endpoint.<|||||>Could you ensure it works with `huggingface_hub` version `v0.11.0rc1`? I think there was an issue with this callback.
cc @Wauplin <|||||>To be more precise, it should already work with `v0.11.0rc1` (was not the case with `v0.11.0rc0`) BUT a warning message should be triggered "Creating a repository through 'clone_from' is deprecated". To prevent this warning, `PushToHubCallback` should create the repo before cloning it. Something like:
```py
create_repo(repo_id, exist_ok=True)
repo = Repository(clone_from=repo_id)
```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20231). All of your documentation changes will be reflected on that endpoint.<|||||>All tests are now passing with `HUGGINGFACE_CO_STAGING=1 py.test tests/test_modeling_tf_common.py::TFModelPushToHubTester -vv`, tested against version `0.10.1` of the hub.
@Wauplin I've factored in the `create_repo()` before cloning the repo
@Rocketknight1 it was indeed failing because of the lack of `compile()`. After the fix, I ran against a problem where the last epoch was not being stored (because of the process kill). Let me know if you agree with the changes in `PushToHubCallback`<|||||>Nice, thanks for making the change @gante ! |
transformers | 20,230 | closed | [ASR Examples] Update README for Whisper | # What does this PR do?
Summarises changes from https://github.com/huggingface/transformers/pull/19519 and the blog post at https://huggingface.co/blog/fine-tune-whisper, providing examples for fine-tuning Whisper using the example script `run_speech_recognition_seq2seq.py`.
Required some re-jigging of the Speech-Encoder-Decoder Model examples in the section "[Automatic Speech Recognition with Sequence-to-Sequence](https://github.com/huggingface/transformers/pull/20230#sequence-to-sequence)". | 11-15-2022 11:10:51 | 11-15-2022 11:10:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,229 | closed | Add AutoBackbone + ResNetBackbone | # What does this PR do?
As #20204 is a big PR, this PR adds a part of it as a standalone PR. This PR adds the AutoBackbone class, along with one example class that it supports, namely ResNetBackbone.
## Usage
Usage is as follows:
```
from transformers import AutoImageProcessor, AutoBackbone
import torch
from PIL import Image
import requests
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = AutoBackbone.from_pretrained("microsoft/resnet-50", out_features=["stage1", "stage2", "stage3", "stage4"])
inputs = processor(image, return_tensors="pt")
outputs = model(**inputs)
for k,v in zip(outputs.stage_names, outputs.hidden_states):
print(k, v.shape)
```
which prints:
```
stage1 torch.Size([1, 256, 56, 56])
stage2 torch.Size([1, 512, 28, 28])
stage3 torch.Size([1, 1024, 14, 14])
stage4 torch.Size([1, 2048, 7, 7])
```
Besides this, one can also obtain information about the channel dimension and stride for each of the requested stages, like so:
```
print(model.channels)
print(model.strides)
```
This is handy as frameworks (like MaskFormer) need to know this information at initialization time.
## To do's/questions
- [ ] We don't want `xxxBackbone` classes to be tested by all tests defined in `test_modeling_common.py`(i.e. it should probably not be part of `all_model_classes`), hence I added the class to IGNORE_NON_TESTED, and added a separate test for it. Let me know if this is ok.
- [ ] It would probably be best to not have our backbones included in the documentation from the start. For now they are just an internal part of models like DETR and MaskFormer. Could we not include them in the docs for now? Currently I'm getting:
```
Exception: The following objects are in the public init so should be documented:
- AutoBackbone
- ResNetBackbone
```
An alternative option could be to add backbones to the list of PRIVATE_MODELS in utils/check_repo.py for now. | 11-15-2022 10:53:46 | 11-15-2022 10:53:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>> My main question is about how the backbone is loaded and saved alongside our current models. For example, if the backbone has non-default settings - where is this information saved? Is it part of the model e.g. DETR config?
A backbone can be loaded and saved just like any other model in the library (due to the inheritance of `PreTrainedModel`), either using a config to initialize the backbone with randomly initialized weights or using the `from_pretrained` method to load pre-trained weights.
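For illustration, that round trip could look like this (a sketch; the local path is just an example):
```python
from transformers import AutoBackbone, ResNetBackbone, ResNetConfig

# Pre-trained weights, as in the usage example above
backbone = AutoBackbone.from_pretrained(
    "microsoft/resnet-50", out_features=["stage2", "stage3", "stage4"]
)

# Saved and reloaded like any other PreTrainedModel
backbone.save_pretrained("./my-resnet-backbone")
backbone = AutoBackbone.from_pretrained("./my-resnet-backbone")

# Or randomly initialized from a config
backbone = ResNetBackbone(ResNetConfig())
```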
> Can we assume the backbone is always frozen, or are there cases when people might want to fine-tune it? In this case, how would the backbone weights be saved out?
The backbone is always meant to be fine-tuned together with the rest of the model; I've not seen a case where the backbone is kept frozen for now, but we could definitely add a `freeze` method in case people want to do that.<|||||>@sgugger the remaining CI issue is about the fact that AutoBackbone and ResNetBackbone are not documented, however I'd like to actually keep them away from the docs for now. However I need to keep ResNetBackbone in the main init for it to work with the Auto API.<|||||>@NielsRogge Thanks for your answers. The backbone can definitely be saved with `save_pretrained`, my question is really about how that's bundled together with a model. For example, if I wanted a new model, with a new, non-standard backbone - is this how I would create it?
```
from transformers import AutoConfig, AutoBackbone, AutoModelForXXX
# Modify the default backbone configuration
backbone_config = AutoConfig(backbone_repo)
backbone_config.param = new_value_0
# Modify the default model configuration
model_config = AutoConfig(model_repo)
model_config.param = new_value_1
model = AutoModelForXXX(model_config, backbone_config)
```
Or is the backbone configuration part of the model configuration?
```
from transformers import AutoConfig
model_config = AutoConfig(model_repo)
model_config["backbone"]["param"] = new_value_0
model_config.param = new_value_1
model = AutoModel(model_config)
```
or something else?<|||||>@NielsRogge You can add them to [this list](https://github.com/huggingface/transformers/blob/0d0d77693f79c7f7d39bba6921cc9741f00de988/utils/check_repo.py#L665) for now.<|||||>The way frameworks like DETR and MaskFormer can use a backbone can be as follows. In their configuration, e.g. `MaskFormerConfig`, they should have a `backbone_config` attribute, which can be set to ResNetConfig for instance, like so:
```
from transformers import ResNetConfig, MaskFormerConfig
backbone_config = ResNetConfig(hidden_sizes=...)
config = MaskFormerConfig(backbone_config=backbone_config)
```
So yes, the backbone configuration is part of the model configuration.
Then, inside modeling_maskformer.py, one can instantiate the backbone like so:
```
from transformers import AutoBackbone
self.backbone = AutoBackbone.from_config(config.backbone_config)
```<|||||>@NielsRogge Great - thanks for clarifying :)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger @amyeroberts feel free to approve :) <|||||>This has been fixed in https://github.com/NielsRogge/transformers/commit/1863e8ff5dcaf29333d5bd08f266a562d36a8ee6, have marked the conversation as resolved<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20229). All of your documentation changes will be reflected on that endpoint.<|||||>Failing tests are unrelated (there seems to be an issue with the cache of MarkupLM and mBART50 on the CI), merging. |
transformers | 20,228 | closed | Remove `authorized_missing_keys`in favor of _keys_to_ignore_on_load_missing | # What does this PR do?
Just cleans up a bug/typo discovered. `_keys_to_ignore_on_load_missing` is supported but not `authorized_missing_keys`.
| 11-15-2022 10:49:31 | 11-15-2022 10:49:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20228). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,227 | closed | Fix Tapas/Scatter device issue | # What does this PR do?
Fix Tapas/Scatter device issue | 11-15-2022 10:10:53 | 11-15-2022 10:10:53 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,226 | closed | Adding ASR pipeline example. | # What does this PR do?
This is the first PR of a series that will aim to add examples for ALL pipelines.
The example uses `pipeline` instead of the documented `AutomaticSpeechRecognitionPipeline` intentionally.
It breaks my personal rule of documenting exactly what we're documenting, but it makes the example a lot simpler and enables linking to the pipeline tutorial for the "orthogonal" arguments (notably `batch_size`).
https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.example
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 11-15-2022 08:55:31 | 11-15-2022 08:55:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20226). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,225 | open | Whisper: timestamp tokens are missing in the tokenizer vocabulary | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
### Who can help?
@ArthurZucker
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The vocabulary size returned by the `WhisperTokenizer` does not match the vocabulary size reported in the configuration `config.vocab_size`. The timestamp tokens are missing in the tokenizer vocabulary. Consider this example:
```python
import transformers
tokenizer = transformers.WhisperTokenizer.from_pretrained("openai/whisper-tiny")
config = transformers.WhisperConfig.from_pretrained("openai/whisper-tiny")
vocab = tokenizer.get_vocab()
print(len(vocab) == config.vocab_size) # prints False
for i in range(1500 + 1):
    timestamp = "<|%.2f|>" % (i * 0.02)
    vocab[timestamp] = len(vocab)
print(len(vocab) == config.vocab_size) # prints True
```
The token surface used in the code snippet is copied from the reference implementation:
https://github.com/openai/whisper/blob/9f70a352f9f8630ab3aa0d06af5cb9532bd8c21d/whisper/tokenizer.py#L151
### Expected behavior
The vocabulary size returned by the tokenizer should match the model vocabulary size. | 11-15-2022 08:53:49 | 11-15-2022 08:53:49 | Hey! Though I agree with you on the fact that normally the tokenizer vocab size is the same as the model's, in this case, the original model was similar. The `timestamp` tokens are all outside vocabulary and decoded as `""` with the fast `GPT2FastTokenizer` in the original code. The `WhisperTokenizer` was adapted to follow this, in order to not bother with the tokens that are only used with the `timestamp_logits_processor`. Indeed, all the extra tokens (>50363) are treated as timestamps prediction and "ignored".
cc @LysandreJik as we had a lot of issues with other models; there was discussion on whether to always add the extra tokens or not.
<|||||>Thank you for the explanation! Feel free to close this issue if you want to keep it this way. I can work around it in my own code.
For the context, I'm converting some Transformers models to another format and I want to always match the tokenizer vocabulary size to the model vocabulary size. In many cases I need to add some tokens (most often the "madeupword" to pad the vocabulary to a multiple of 8) and sometimes I need to remove some (e.g. for `facebook/bart-large-cnn` the tokenizer has 1 additional token for some reasons). It would be great if `len(tokenizer.get_vocab())` is always consistent with the model vocabulary size.<|||||>Since the original OpenAI implementation moved from HF tokenizers to their own Tiktoken library, it seems timestamp tokens are now handled and converted to token ids. Right now the timestamps tokens in HF are handled as strings instead of individual tokens.
```python
from transformers import WhisperTokenizer
from whisper.tokenizer import get_tokenizer
hf_tok = WhisperTokenizer.from_pretrained("openai/whisper-tiny")
openai_tok = get_tokenizer(multilingual=True, language="en", task="transcribe")
openai_tok.encode("<|1.00|>", disallowed_special=[])
# [27, 91, 16, 13, 628, 91, 29]
hf_tok.encode("<|1.00|>", add_special_tokens=False)
# [27, 91, 16, 13, 628, 91, 29]
openai_tok.encode("<|1.00|>", allowed_special=set(openai_tok.special_tokens.keys()))
# [50414]
hf_tok.encode("<|1.00|>", add_special_tokens=True)
# [50258, 50363, 27, 91, 16, 13, 628, 91, 29, 50257]
```
Could it be the time to revisit this issue?<|||||>Nope, we also added support for decoding with timestamps. For that you just need to specify the `decode_with_timestamps` see [here](https://github.com/ArthurZucker/transformers/blob/4a9e17b0ab3f35f9be92b82e7f783d043c0ca161/src/transformers/models/whisper/tokenization_whisper.py#L493)<|||||>Yeah, but if you want to train using the right timestamps tokens, there's no support for that AFAIK. We had to add the tokens manually. The encoding function is a bit more convoluted to modify to support encoding of the timestamps tokens with a flag like it's now implemented for decoding.<|||||>Then we should probably add them to `added_tokens_encoder` and refactor a bit the tokenizer for encoding decoding wdyt @sanchit-gandhi @hollance <|||||>Yep I agree - took a look through and @versae is spot on, the new OpenAI tokenizer has these tokens as part of their tokenizer, so they can be encoded directly. We should follow suit and update our encoding function accordingly<|||||>Also if they are part of the special tokens, they will not be skipped by default, and would have to be skipped using `skip_special_tokens = True`. But should be alright! @versae feel free to open a PR and ping me if you have time, otherwise I might be able to tackle that in 2 weeks<|||||>Hey! Sorry, haven't had the time to properly implement this, but I can confirm than using `AddedToken`s works well ๐<|||||>I'll open a PR, I am not entirely sure just using added tokens will solve this. We need backward compatibility so I'll add a new argument like `encoder_special`. Will see<|||||>Ok! So it seems that `skip_special_tokens` when encoding will make it's way to transformers ๐
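In the meantime, a workaround along the lines discussed above is to register the timestamp strings as added tokens yourself (a sketch; whether the resulting ids line up with a given checkpoint's reserved timestamp ids should be verified):
```python
from transformers import AddedToken, WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-tiny")

# Register the 1501 timestamp strings ("<|0.00|>" ... "<|30.00|>") as individual tokens,
# so that e.g. "<|1.00|>" encodes to a single id instead of being split into sub-tokens.
timestamps = [
    AddedToken("<|%.2f|>" % (i * 0.02), lstrip=False, rstrip=False) for i in range(1500 + 1)
]
tokenizer.add_tokens(timestamps)
```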
<|||||>Keep us posted! |
transformers | 20,224 | closed | Question about layer norm in T5 | I notice that there is no bias and no subtraction of mean in layer norm.
I understand no bias but I'm confused about the meaning of computing variance without subtraction of mean.
Normally, we compute variance, for example:
<img width="491" alt="image" src="https://user-images.githubusercontent.com/7160927/201863288-bcb63739-0f74-4487-bbb8-91299dad2eef.png">
But it's different here. Why is that?
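For reference, T5's layer norm is the RMSNorm variant: it only rescales by the root mean square of the activations, with no mean subtraction and no bias. A minimal sketch of that behaviour (illustrative, not the exact `transformers` source):
```python
import torch
from torch import nn


class T5StyleLayerNorm(nn.Module):
    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # "variance" here is just the mean of the squares -- the mean is never subtracted
        variance = hidden_states.pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states
```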
Hoping some explanation. | 11-15-2022 08:05:58 | 11-15-2022 08:05:58 | Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.<|||||>Sure. have created this topic [question-about-layer-norm-in-t5](https://discuss.huggingface.co/t/question-about-layer-norm-in-t5/26142). I'd close this issue. |
transformers | 20,223 | closed | Yolo model trigger win10/11 reboot on Intel 13900k | ### System Info
All Yolos models/pipelines cause my PC to reboot immediately on Windows 10 and 11 (WSL in Win11 and native, non-virtual Ubuntu work well on the same machine).
1. I reinstalled the OS and tested (on both Win10 and Win11) twice, making sure all drivers are up to date. The results are the same. (I tested the system with the AIDA64 system stability test; all OK.)
2. I tried Ubuntu (not virtual) and WSL in Windows 11; both work OK.
3. I tried other models, such as BERT; all work fine.
``` python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')
inputs = feature_extractor(images=image, return_tensors="pt")
for _ in range(100):
    model(**inputs)  # this line causes the PC to reboot (usually after several loops)
```
Hardware info:
CPU: Intel 13900k
Motherboard: ASUS Z690-G, bios 2103
GPU: Intel 13900k
Software info:
Conda env latest
Python 3.10.4
transformers 4.24.0
windows 10 and windows 11
My guess is that it is caused by a hardware incompatibility.
Could you please help to take a look?
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Use python 3.10 on windows 10/11 with Intel 13900k
run the code below
``` python
from transformers import YolosFeatureExtractor, YolosForObjectDetection
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = YolosFeatureExtractor.from_pretrained('hustvl/yolos-tiny')
model = YolosForObjectDetection.from_pretrained('hustvl/yolos-tiny')  # this line causes the PC to reboot.
inputs = feature_extractor(images=image, return_tensors="pt")
for _ in range(100):
    model(**inputs)  # this line causes the PC to reboot (usually after several loops)
```
### Expected behavior
The pc reboot immediately after several loop | 11-15-2022 07:55:28 | 11-15-2022 07:55:28 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>How fast does it work with transformers?
13900k compared to gpu? |
transformers | 20,222 | closed | Quick fix for `test_torch_fx` under torch 1.13 | # What does this PR do?
Quick fix for `test_torch_fx` under torch 1.13.
Since `torch 1.13`, using `meta` device gives some noisy tensors, which breaks `test_torch_fx`.
@michaelbenayoun is more qualified than me to explain this more clearly.
This PR changes the device to "cpu" if `fx.py` is used in the testing.
running all `test_torch_fx_*` tests:
- with "**meta**": 1m33s
- with "**cpu**": 1m45s
So the speed-up of using `meta` is tiny, and we can just use `cpu`.
However,
**Questions**:
- It looks like `fx.py` is only used in the testing (at least in `transformers`)?
- Is this module in `optimum`?
- If so:
- is it used only in testing?
- is it used for large real models? | 11-15-2022 07:53:38 | 11-15-2022 07:53:38 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20222). All of your documentation changes will be reflected on that endpoint.<|||||>Apart from the exceptional case where we don't want to use the prod server for testing, I am very much against using env variable/constants that change for testing. Why would the test fail for the meta device but in real life the code would work? It doesn't make any sense.<|||||>Well, using `meta` with `torch 1.13` gives random values, which fail some **input checks** in some modeling files, like the one at the end.
If this is the intended behavior of `meta` (i.e. if the behavior in PT < 1.13 was wrong - but we took advantage of it), then moving away from `meta` seems the right way to go to me.
If you still think we should stick with `meta`, I will leave @michaelbenayoun to take care of the related test though :-)
#### More details
(notice that here we don't provide any input tensor)
```python
traced_model = symbolic_trace(model, input_names)
```
During this call, in `fx.py`, there is tensor preparation that uses `meta`, and it gives random values which fail the check in some modeling files
(bart)
```python
eos_mask = input_ids.eq(self.config.eos_token_id)
if len(torch.unique_consecutive(eos_mask.sum(1))) > 1:
raise ValueError("All examples must have the same number of <eos> tokens.")
```<|||||>close for now, unless one day we need this. |
transformers | 20,221 | closed | Unable to use BERTmodel | ### System Info
Transformer version = 4.24.0
Python = 3.8
Error: SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /bert-large-uncased/resolve/main/config.json (Caused by SSLError(SSLError(136, '[X509: NO_CERTIFICATE_OR_CRL_FOUND] no certificate or crl found (_ssl.c:4264)')))
@LysandreJik
### Who can help?
@LysandreJik
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
from summarizer import Summarizer
body = 'Text body that you want to summarize with BERT'
body2 = 'Something else you want to summarize with BERT'
model = Summarizer()
model(body)
model(body2)
### Expected behavior
Load the summaries | 11-15-2022 06:31:57 | 11-15-2022 06:31:57 | Are you behind some firewall? It seems like you can't connect hugging face. If you are able to freely access internet with proxy you could yous this snippet:
```python3
import os
os.environ['HTTP_PROXY'] = 'http://proxy_url:proxy_port'
os.environ['HTTPS_PROXY'] = 'http://proxy_url:proxy_port'
```
otherwise you can manually download the model and load it from disk.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
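For the manual-download route mentioned above, one way to do it (illustrative local path and model id) is to copy the model files from a machine with internet access and point `from_pretrained` at the local folder:
```python
from transformers import AutoModel, AutoTokenizer

# After copying config.json, vocab.txt, pytorch_model.bin, etc. into ./bert-large-uncased
tokenizer = AutoTokenizer.from_pretrained("./bert-large-uncased")
model = AutoModel.from_pretrained("./bert-large-uncased")
```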
transformers | 20,220 | closed | Fixing Spelling Error in Testing Documentation - Issue #20194 | # What does this PR do?
This PR fixes a spelling error in the testing documentation. Changes "checkt" to "check".
Fixes #20194
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-15-2022 01:08:07 | 11-15-2022 01:08:07 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20220). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,219 | closed | Add Neighborhood Attention Transformer (NAT) and Dilated NAT (DiNAT) models | # What does this PR do?
This PR adds [NAT](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer/blob/main/NAT.md) and [DiNAT](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer/blob/main/DiNAT.md) and their dependencies.
## Dependencies
- [NATTEN](https://github.com/SHI-Labs/NATTEN/) is the only new requirement. The models themselves are mostly in the same style as most `timm` models. They just require NATTEN to get the sliding window attention.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
- Yes, mostly boilerplate from similar models.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @NielsRogge @amyeroberts @sgugger | 11-14-2022 22:53:22 | 11-14-2022 22:53:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.<|||||>In this instance, I'd actually prefer to rely on the extra dep (as long as it's properly set up as a soft dependency, which seems to be the case in the PR). We don't know how to maintain CUDA kernels anyway, so support will be a lot better if it's done elsewhere.<|||||>Hi @NielsRogge @sgugger
Actually we're happy to do it either way, but the reason we packaged NATTEN as a pip package in the first place is to make installation easier, especially since we plan to upgrade it frequently.
Unlike Deformable Attention's extension, NATTEN doesn't come with a fixed set of kernels. There are still improvements we've planned for NATTEN, especially adding new kernels to optimize latency.
And just to confirm @sgugger 's comment, maintaining all the kernels in NATTEN might increase your wheel sizes, which I'm not sure if you want to do. The cpu-only wheels aren't too bad, but the ones with cuda wheels are up to 50MB.
And as far as testing CUDA kernels go, you'd need to have unit tests to check the backwards functions (gradcheck), and running those for all different use cases that call different kernels is just really time consuming (and we only pull it off by running it on 80GB A100 GPUs; it's so memory-intensive).
And yes, as @sgugger stated, it would work as a soft dependency; even imports aren't broken. But there's dummy calls to the package in case it's not available, that will raise an error only when the forward functions are called.
As for the torch tests, I only did those as a suggestion. I would personally recommend having a separate test for these models in general so that it doesn't get in the way. Additionally, knowing the torch build beforehand is better, since that way we can just specify a wheel URL and have it just install a lot faster.
I'll make the changes to the docs and run fix-copies again.
And yes, both models were cloned off of Swin; the architectures are somewhat similar.
The difference here is that there's a convolutional tokenizer and downsampler replacing the patch+embed and patch merging layers; and we like to keep tensors in shape B H W C, since NATTEN also expects the height and width axes to be unrolled, like a Conv2d. <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20219). All of your documentation changes will be reflected on that endpoint.<|||||>Actually, I just noticed `transformers` doesn't come with wheels, right?
My previous statement about wheel sizes is irrelevant in that case.
However, I would shift more towards @sgugger 's point of view, since loading torch extensions at runtime becomes less and less reliable as extensions grow, and NATTEN already has twice the number of kernels compared to MSDeformAttn (excluding the templating that goes on in NATTEN). This would have the users wait up to 5 minutes before being able to use these models, and would affect reproducibility (because the torch build's cuda version doesn't necessarily match the system's, or the expected one for that matter).
FWIW, I've definitely seen libraries take one of three approaches:
* either adding a C backend to their package for all custom operations and build wheels (`detectron2`, `mmcv`);
* or just having soft dependencies to pip packages that already do that to avoid the hassle. It also doesn't create new issues with upgrades to CUDA or torch (which depending on their usage can break things)
* And of course there's still the option of lazy loading (the way MSDeformAttn is handled right now), which is honestly a great alternative to both, but only as long as the kernels aren't being updated and compile time is relatively low.
<|||||>@sgugger All done.
And sorry I didn't go with directly applying your changes, some of them needed a few more replacements so I figured I'd make the changes and commit all of them at once.<|||||>Thanks! You'll need a rebase on main to get the tests to pass as TensorFlow new release breaks the CI<|||||>Thanks for the reviews and feedback @sgugger @NielsRogge @amyeroberts .
Looking forward to contributing more in the future. |
transformers | 20,218 | closed | Generate: add generation config class | # What does this PR do?
Adds the class that handles generation configurations. From this class, we can then start working on the interaction with other parts of the Hugging Face ๐ค ecosystem. See https://github.com/huggingface/transformers/issues/18655 for the original plan, which matches this implementation.
Throughout the PR, there are a few very visible comments where your input would be appreciated to establish solid base functionality.
The expected follow-up tasks are:
1. We can load a Generation Config from a Model Config (to make it compatible with all existing repos)
2. `.generate()` inherits default values from generate config, instead of from model config. Add an argument to pass a generation configuration.
3. Add doctests for loading, storing, and using with `.generate()`
4. The top 100 models with generate compatibility get a PR with the corresponding generation config file(s). At least one has multiple generation config files, to be used as an example.
5. (bonus) Generate config validates its attributes at initialization time
6. (bonus) Pipelines make automatic use of the appropriate generation config file
7. (bonus) Ensure that advanced functionality (e.g. models like RAG and Trainer) operates well with the generation config file | 11-14-2022 19:59:29 | 11-14-2022 19:59:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20218). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger @patrickvonplaten all comments addressed. There is a doctest example that loads our first `generation_config.json`, [from gpt-2](https://huggingface.co/gpt2/blob/main/generation_config.json) ๐
I took the liberty to add a few goodies, like making it a public class (to allow `from transformers import GenerationConfig`), adding a `__repr__` (for a preview) and sections in the `__init__` docstring (I think users will appreciate). |
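For readers following along, a minimal sketch of the intended usage, assuming the API lands as described in the plan above (hedged — details may change in the follow-up PRs):
```python
from transformers import GenerationConfig

# Load the generation defaults stored alongside a checkpoint
# (e.g. the generation_config.json mentioned above).
generation_config = GenerationConfig.from_pretrained("gpt2")

# Tweak and persist a task-specific configuration.
generation_config.max_new_tokens = 64
generation_config.do_sample = True
generation_config.save_pretrained("my-model")

# Per follow-up task 2, the plan is that this eventually plugs into generation:
# model.generate(**inputs, generation_config=generation_config)
```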
transformers | 20,217 | closed | remaining pytorch type hints | # What does this PR do?
Type hints
@Rocketknight1 | 11-14-2022 18:43:23 | 11-14-2022 18:43:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20217). All of your documentation changes will be reflected on that endpoint.<|||||>@Rocketknight1 There is only 1 left: EsmForProteinFolding. when I tested test_modeling_esmfold.py before doing any changes to the file some tests where initially getting failed.

<|||||>@IMvision12 Those tests shouldn't be failing anymore, but don't worry - I'm happy to accept all the other changes! |
transformers | 20,216 | closed | Infer pretrained base model and tokenizer used from a fine-tuned model_dir | ### Feature request
I would like to infer the pretrained base model and the tokenizer from the model_dir of a fine-tuned model. Is that possible with the default trainer setup? What I did previously was to programmatically copy all the tokenizer data into each checkpoint folder. But this seems cumbersome.
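For reference, a possible workaround sketch until this is supported natively. It assumes the fine-tuned checkpoint was written with `save_pretrained`, which by convention records the original checkpoint name in `config.json` under `_name_or_path` (not a guarantee); the directory path below is hypothetical:
```python
import json
from pathlib import Path

from transformers import AutoTokenizer

model_dir = Path("path/to/fine-tuned-model")  # hypothetical fine-tuned model_dir
raw_config = json.loads((model_dir / "config.json").read_text())

# `_name_or_path` in the saved config usually points at the base checkpoint.
base_name = raw_config.get("_name_or_path")
tokenizer = AutoTokenizer.from_pretrained(base_name) if base_name else None
```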
### Motivation
This reduces overhead during inference/prediction. All I have to specify is just the model_dir.
### Your contribution
Given enough guidance, I am happy to submit a PR. | 11-14-2022 18:12:26 | 11-14-2022 18:12:26 | Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,215 | closed | Why is tokenized-text Non-Fast when it is of type BatchEncoding and the tokenizer is Fast? | ### System Info
Huggingface Transformers version **4.21.1**
PyTorch version **1.12.1+cu116**
Python version **3.10.4**
Platform Ubuntu **22.04.1 LTS**
**transformers-cli env**
Traceback (most recent call last):
File "/home/vin/.local/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "/home/vin/.local/lib/python3.10/site-packages/transformers/commands/transformers_cli.py", line 24, in <module>
from .pt_to_tf import PTtoTFCommand
File "/home/vin/.local/lib/python3.10/site-packages/transformers/commands/pt_to_tf.py", line 21, in <module>
from datasets import load_dataset
ModuleNotFoundError: No module named 'datasets'
### Who can help?
@LysandreJik, @SaulLu
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
### Step 1: Text is tokenized as follows:
```
batch_nnIn_ids = self.tokenizer(text=batch_histories,
text_pair=batch_user_input_pretok,
is_split_into_words=True,
padding=True,
truncation='only_first',
return_tensors='pt',
return_token_type_ids=False,
return_attention_mask=True,
return_overflowing_tokens=False)
```
### Step 2A: Tokenized text is put in a dictionary as follows:
```
batch = {
'user_input_pretok': batch_user_input_pretok,
'nnIn_ids': batch_nnIn_ids,
'ids': batch_ids,
'labels': batch_token_labels
}
return batch
```
### Step 2B: Using PDB, I check that the tokenized text is of type BatchEncoding and that the tokenizer is Fast:
```
(Pdb) type(batch['nnIn_ids']), batch['nnIn_ids'].is_fast, self.tokenizer.is_fast
(<class 'transformers.tokenization_utils_base.BatchEncoding'>, True, True)
Note that the tokenizer is Fast, batch['nnIn_ids'] is Fast and it is of type BatchEncoding
```
### Step 3: Control goes to Lightning, and then to the predict function where I use PDB to make the same checks as follows:
```
(Pdb) type(batch['nnIn_ids']), batch['nnIn_ids'].is_fast, self.tokenizer.is_fast
(<class 'transformers.tokenization_utils_base.BatchEncoding'>, False, True)
```
Why did batch['nnIn_ids'] switch from Fast to non-Fast?
Note that the tokenizer is Fast but batch['nnIn_ids'] is Non-Fast even though it is of type BatchEncoding
### Expected behavior
A text is tokenized using a Fast tokenizer, and the tokenized-text is of type BatchEncoding and it is also Fast.
After this tokenized text is passed to another function through Lightning, it becomes non-Fast even though the tokenized-text remains of type BatchEncoding and the tokenizer remain Fast. The expected behavior of the tokenized-text should remain Fast.
| 11-14-2022 18:00:00 | 11-14-2022 18:00:00 | cc @ArthurZucker <|||||>Hey, could you provide a full reproduction script? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,214 | closed | Allow trainer to return eval. loss for CLIP-like models | # What does this PR do?
Allow trainer to give **evaluation** loss for CLIP-like models.
Currently, this line
https://github.com/huggingface/transformers/blob/07d8d6e2f7a920d399e5e86a82d78179cdfa6746/src/transformers/trainer.py#L3192
gives `has_labels = False` for CLIP-like models, so the Trainer cannot return a loss value during evaluation.
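A rough sketch of the direction this takes (illustrative, not the exact Trainer diff): treat models whose forward can return a loss without labels — such as CLIP-like contrastive models — as loss-producing during evaluation.
```python
import inspect


def can_return_loss_without_labels(model) -> bool:
    # CLIP-like models expose a `return_loss` argument and compute a contrastive
    # loss from the inputs alone, without a `labels` key.
    return "return_loss" in inspect.signature(model.forward).parameters


# Pseudo-usage inside the evaluation step:
# compute_loss = has_labels or can_return_loss_without_labels(model)
```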
without this PR:
```bash
***** eval metrics *****
epoch = 1.0
eval_runtime = 0:00:01.67
eval_samples_per_second = 9.571
eval_steps_per_second = 4.785
```
with this PR.
```bash
***** eval metrics *****
epoch = 1.0
eval_loss = 0.8159
eval_runtime = 0:00:01.66
eval_samples_per_second = 9.598
eval_steps_per_second = 4.799
``` | 11-14-2022 16:53:51 | 11-14-2022 16:53:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Hopefully the change covers everything that could happen now and in the future. |
transformers | 20,213 | closed | Generate: add Bloom fixes for contrastive search | # What does this PR do?
Bloom has a different cache format, where the batch size and the number of heads are packed in a single dimension. Contrastive search needs to manipulate the cache at the batch dimension, so naturally it fails.
This PR adds functionality to convert Bloom's cache back and forth between its own format and the standard cache format. Then, propagates the use of these new functions to places where the conversion logic was already being used, and finally fixes Bloom's contrastive search.
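For intuition, a hedged sketch of what such a conversion looks like (the real helpers live on the Bloom model class and may differ in detail):
```python
def bloom_cache_to_standard(past_key_values, batch_size):
    """Reshape Bloom's packed cache [batch_size * num_heads, ...] into the standard
    [batch_size, num_heads, ...] layout, so the batch dimension can be indexed and
    reordered (which is what contrastive search needs)."""
    num_heads = past_key_values[0][0].shape[0] // batch_size
    return tuple(
        (
            key.reshape(batch_size, num_heads, *key.shape[1:]),
            value.reshape(batch_size, num_heads, *value.shape[1:]),
        )
        for key, value in past_key_values
    )
```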
All slow tests are passing.
____________________________________
This fix was also requested [here](https://huggingface.co/spaces/joaogante/contrastive_search_generation/discussions/1#636e23d1c441b42489215026) | 11-14-2022 15:50:39 | 11-14-2022 15:50:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20213). All of your documentation changes will be reflected on that endpoint.<|||||>(rebasing to include #20200 in CI, the related test was failing)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20213). All of your documentation changes will be reflected on that endpoint.<|||||>Stumbled on the same issue and found this fix. Thanks lots! |
transformers | 20,212 | closed | The Document of Pipelines are not complete. | ### System Info
transformers 4.24.0
### Who can help?
@sgugger @stevhliu
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.FillMaskPipeline.__call__
The "\_\_call\_\_" function only describes what will happen when I pass a list of samples into the pipeline. However, this function also receives other input, such as a dataset: https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#pipeline-batching
The document currently doesn't have a description of what will happen if I pass a dataset to the pipeline. This bothers me a lot. For example, when I pass a dataset and use batch_size=64 in this pipeline, I expect that I will get the results of 64 samples. However, the pipeline only returns the result of the next sample and I need to iterate 64 times to get the results of the whole batch. This doesn't involve any fatal error but is very confusing.
### Expected behavior
add a description of passing a dataset to a pipeline. | 11-14-2022 15:38:02 | 11-14-2022 15:38:02 | cc @Narsil <|||||>Is this doc more helpful ?
https://huggingface.co/docs/transformers/v4.24.0/en/main_classes/pipelines
We definitely need to update the docs for pipeline overall.
<|||||>This doc still doesn't mention that results will be returned one by one (instead of in a batch) if I pass a dataset to the pipeline.<|||||>https://github.com/huggingface/transformers/pull/20437
Don't hesitate to comment on the PR if you feel it's not clear enough.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,211 | closed | prepare for "__floordiv__ is deprecated and its behavior will change in a future version of pytorch" | # What does this PR do?
Should address the `__floordiv__` warnings mentioned in #19934. Dividing torch tensors using `//` is deprecated and has to be done via `torch.div`.
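A small illustration of the replacement (the actual call sites in the library vary):
```python
import torch

lengths = torch.tensor([7, 9, 12])
block = 4

# Old pattern, which now emits the deprecation warning:
# n_blocks = lengths // block

# New pattern:
n_blocks = torch.div(lengths, block, rounding_mode="floor")
```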
| 11-14-2022 15:30:34 | 11-14-2022 15:30:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,210 | closed | [DPR] fix unexpected "pooler" keys | # What does this PR do?
Fixes #19111, by adding [r"pooler"] to the list of ignored unexpected keys for both the `DPRPretrainedContextEncoder` and `DPRPretrainedQuestionEncoder`. | 11-14-2022 15:28:31 | 11-14-2022 15:28:31 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20210). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,209 | closed | Add gpt-sw3 model to transformers | This adds the gpt-sw3 models and tokenizer to hf. The models are developed by AI Sweden and others. They are gpt models trained from scratch with the nemo-megatron framework and will initially range in sizes from 128m to 20B. The models are multilingual and the languages in the models are English, Swedish, Norwegian, Danish and Icelandic.
Fixes # (issue) https://github.com/huggingface/transformers/issues/20176
@ArthurZucker | 11-14-2022 14:04:00 | 11-14-2022 14:04:00 | Hey! Feel free to ping me if you need any pointers! :)
5seems like the history is a bit broken at this point `rebasing` with a force push should help. <|||||>> Hey! Feel free to ping me if you need any pointers! :) 5seems like the history is a bit broken at this point `rebasing` with a force push should help.
Hi thank you for the help, did a rebase!
We are soon going to put the models public on the hub, after that we hope that we are close to being able to make the PR ready for review.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>We have finally reached the point where most of the work seems to be done and would appreciate any feedback or a merge. The models haven't been made public yet but will be shortly. @ArthurZucker <|||||>> Thanks for the great work! My biggest question here is : what are the architectural differences with `gpt2`? If there are none, I am not sure that we actually need to add any modelling files
So after reimplementing the code from nemo Megatron it turns out that with float 32 we get the exact same output between our implementation and the hf gpt-2 implementation and we reverted back to copying the GPT-2 hf implementation. So in the end we might not need it although we would of course be happy to have our model name featured as a class, the same as bloom and opt. The tokenizer on the other hand implements new behaviour.
What is your opinion on the model class?<|||||>@ArthurZucker just checking in that we adressed your comments! Thank you. We had a couple of questions otherwise it looks good from our end ๐คฉ<|||||>Actually, it seems like the modeling code is exactly the same as for GPT2? In this case you can just set in the auto-mappings a correspondance `("gpt-sw3", "GPT2Model")` without needing to add a new model module.<|||||>Yep sorry for the late reply! Let's do the same as what was done with [BertJapanese](https://huggingface.co/docs/transformers/model_doc/bert-japanese). I'll review again sorry for not realising sooner `# Copied from` ๐ <|||||>> Actually, it seems like the modeling code is exactly the same as for GPT2? In this case you can just set in the auto-mappings a correspondance `("gpt-sw3", "GPT2Model")` without needing to add a new model module.
Thank you for your feedback, we're happy to follow your lead on how to proceed! So, if we understand you correctly, we should then remove `modeling_gpt_sw3.py`, `configuration_gpt_sw3.py` entirely?
> Yep sorry for the late reply! Let's do the same as what was done with [BertJapanese](https://huggingface.co/docs/transformers/model_doc/bert-japanese). I'll review again sorry for not realising sooner `# Copied from` sweat
Should we await further review or simply get started on this?
@sgugger @ArthurZucker <|||||>Yes, that would be easier. Just remove the model and config files and in the auto mapping, use the GPT2 classes.<|||||>Thank you again for your help, I hope we have now resolved all of your issues. Do you see anything else required from our side in this PR? @sgugger @ArthurZucker <|||||>A last nit @JoeyOhman , could you add an example of a pretrained model that you released being loaded in the doc? Like what is done with `BertJapanese` [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20209/en/model_doc/bert-japanese). Would help people to understand that they can use the `GPT2` model with this tokenizer ๐ <|||||>Hey @ekgren could you add the correct checkpoints? They are probably private.
See our CI fail [here](https://github.com/huggingface/transformers/actions/runs/3681682680/jobs/6228686391#step:8:1027) <|||||>@sgugger @ArthurZucker Thank you for all the help and guidance! We have made all the tokenizers reffered to in the PR public.
We encountered some internal issues with the model sharing in the last minute, very sorry for that. Currently we are not allowed to share the model files publicly. However we can share the tokenizer and would very much like for it to be included in huggingface, since those with private access to the model easily can use the full hf ecosystem. We hope to be able to share the models fully public in the near future.
Hopefully our PR can still be included in the release now that the tests should pass.<|||||>No problem, I was thinking about the tokenizer rather than the actual checkpoints! You were mostly adding a tokenizer so I don't really see an issue with this ๐ Thanks for the contribution! |
transformers | 20,208 | closed | Generate: TF sample doctest result update | # What does this PR do?
Updates sample's output to match the [expected output in CI](https://github.com/huggingface/transformers/actions/runs/3457499228/jobs/5770974732), which has a GPU.
(The previous output was for a CPU device -- sampling has different outputs for the same seed depending on the hardware, due to implementation differences.) | 11-14-2022 14:02:59 | 11-14-2022 14:02:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20208). All of your documentation changes will be reflected on that endpoint.<|||||>> But here do you mean some specific implementation differences in the generation method?
No, all differences are just at hardware level :) |
transformers | 20,207 | closed | Pipeline only returns the result of the first sample in a batch | ### System Info
transformers 4.24.0
### Who can help?
@Nars
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
I just used the code written on https://huggingface.co/docs/transformers/main_classes/pipelines, as follows:
```
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset
import datasets
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
The problem is the variable only contains one result of the first sample in a batch, even though the batch size is 8.
If I write:
```
[x for x in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first")]
```
I will get the results of all samples.
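If you do want the streamed outputs regrouped into chunks of 8 on your side, you can do it explicitly — a sketch reusing the objects from the snippet above (the pipeline still batches the forward passes internally):
```python
from itertools import islice

stream = pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first")
while True:
    batch_out = list(islice(stream, 8))  # collect up to 8 streamed results
    if not batch_out:
        break
    print(batch_out)
```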
### Expected behavior
The batch size is 8 so the variable "out" should contain 8 results (8 lists of dicts). | 11-14-2022 13:44:09 | 11-14-2022 13:44:09 | |
transformers | 20,206 | closed | ONNX support for encoder/decoder separately | ### Feature request
Hello, thanks for your work!
I am trying to leverage NVIDIA's TRITON inference server to speed up generation for Blenderbot model for production.
If I export the model in ONNX format, it exports the entire model, whereas generative models go through separate encoder->decoder cycles (if I understand correctly).
Would you be able to point me towards a direction on how I would be able to manage this?
I've tried:
1. writing an ONNX config and generating the model.onnx files separately for each encoder & decoder. Got stuck at converting encoder outputs to decoder inputs and such, mainly due to mismatch in ONNX model outputs and generative model outputs(such as hidden layers)
2. using the whole model exported as onnx and loading it into TRITON inference server. Also stuck at generation parameters & handling input/outputs, reason same as above.
### Motivation
To use generative model for blenderbot 3 on TRITON inference server
### Your contribution
Would love to make it work and share the results, either through PR for separate encoder/decoder onnx configs or sharing the results. | 11-14-2022 12:55:25 | 11-14-2022 12:55:25 | Might be of interest to @lewtun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @yanksyoon we have a new ONNX exporter in `optimum` [docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model) that splits the encoder and decoder separately for fast inference with T5 models. I recommend opening your feature request there
cc @fxmarty <|||||>Hi @yanksyoon , indeed Triton Inference Server should be quite good! For reference, the latest release should help to export separately encoder/decoder, see the section "Extended ONNX export for encoder-decoder and decoder models": https://github.com/huggingface/optimum/releases/tag/v1.6.0
We are very open to contributions to make the use of Triton Inference Server for decoder models smoother in Optimum. Feel free to open an issue there so that we can track and help!<|||||>@fxmarty @lewtun Thanks for the guidance! Will have a look this weekend :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,205 | closed | Make size_dict conversion logs clearer | # What does this PR do?
* Tidies up logic for converting `size` parameter to the expected dictionary format for image processors.
* Adds `param_name` as a flag so logs reflect the variable being updated e.g. `crop_size` versus `size`
Address part of #20185 - trying to make the logs clearer.
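A hedged illustration of how the flag is meant to read in practice (the exact location/signature of `get_size_dict` may differ from this sketch):
```python
from transformers.image_processing_utils import get_size_dict

size = get_size_dict(224)  # -> {"height": 224, "width": 224}
crop_size = get_size_dict(256, param_name="crop_size")
# Any conversion log now mentions "crop_size" instead of the generic "size".
```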
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 11-14-2022 12:52:07 | 11-14-2022 12:52:07 | @sgugger @patrickvonplaten The quickest and easiest remedy I had was adding a `param_name` flag for the `get_size_dict` function. This way, if an image processor has both `size` and `crop_size` variables being updated, the logs reflect the parameter being changed. However, it's a bit of a dirty trick. LMK if you have an alternative suggestion.
<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20205). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20205). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks! |
transformers | 20,204 | closed | [MaskFormer] PoC of AutoBackbone API to support ResNet + Swin | # What does this PR do?
This PR adds support for more backbones than just Swin for the MaskFormer framework. The MaskFormer authors released checkpoints that leverage either ResNet or Swin as backbones, however we currently only support Swin. To support various backbones, this PR introduces the AutoBackbone API.
It introduces the following improvements:
- [x] adding AutoBackbone, ResNetBackbone
- [x] move MaskFormerSwin to its own modeling files and add MaskFormerSwinBackbone
- [x] make MaskFormer use the AutoBackbone API to leverage any backbone, including ResNet
## AutoBackbone API
The API is implemented as follows. For a given model, one should implement an additional class, `xxxBackbone`, for instance `ResNetBackbone`, in addition to the regular classes like `xxxModel` and `xxxForImageClassification`. The backbone class turns the `xxxModel` into a generic backbone to be consumed by a framework, like DETR or MaskFormer.
The API is inspired by the one used in [Detectron2](https://github.com/facebookresearch/detectron2/blob/main/detectron2/modeling/backbone/backbone.py). This means that any backbone should implement a `forward` and an `output_shape` method:
* the `forward` method returns the hidden states for each of the desired stages
* the `output_shape` method returns the channel dimension + strides for each of the desired stages.
There are additional methods like `size_divisibility` and `padding_constraints` which could be added in the future, for now they don't seem necessary.
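In other words, a backbone class is expected to look roughly like this (a structural sketch only; the names follow the description above, not the merged code):
```python
from torch import nn


class MyModelBackbone(nn.Module):
    """Wraps MyModel and exposes intermediate feature maps to frameworks."""

    def forward(self, pixel_values):
        # Run the underlying model and return one feature map per requested
        # stage (e.g. {"stem": ..., "stage1": ..., ...}).
        ...

    def output_shape(self):
        # Return the channel dimension and stride for each requested stage,
        # so frameworks like MaskFormer can build their heads at init time.
        ...
```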
## Usage
An example can be found below. Basically, the user can specify which layers/stages to get the feature maps from.
```
from transformers import ResNetConfig, ResNetBackbone
import torch
config = ResNetConfig(out_features=["stem", "stage1", "stage2", "stage3", "stage4"])
model = ResNetBackbone(config)
pixel_values = torch.randn(1, 3, 224, 224)
outputs = model(pixel_values)
for key, value in outputs.items():
print(key, value.shape)
```
which prints:
```
stem torch.Size([1, 64, 56, 56])
stage1 torch.Size([1, 256, 56, 56])
stage2 torch.Size([1, 512, 28, 28])
stage3 torch.Size([1, 1024, 14, 14])
stage4 torch.Size([1, 2048, 7, 7])
```
One can check the output specification as follows:
```
print(model.output_shape())
```
which prints:
```
{'stem': ShapeSpec(channels=64, height=None, width=None, stride=2), 'stage1': ShapeSpec(channels=256, height=None, width=None, stride=4), 'stage2': ShapeSpec(channels=512, height=None, width=None, stride=4), 'stage3': ShapeSpec(channels=1024, height=None, width=None, stride=4), 'stage4': ShapeSpec(channels=2048, height=None, width=None, stride=4)}
```
This is useful for frameworks, as they oftentimes require to know these things at initialization.
The Backbone API has a corresponding Auto class, which means that the following also works:
```
from transformers import ResNetConfig, AutoBackbone
config = ResNetConfig(out_features=["stem", "stage1", "stage2", "stage3", "stage4"])
model = AutoBackbone.from_config(config)
```
The AutoBackbone class also supports loading pre-trained weights, like so:
```
from transformers import AutoBackbone
backbone = AutoBackbone.from_pretrained("microsoft/resnet-50")
```
This works because the backbone uses the same `base_model_prefix` as other head models.
## To do's
- [ ] Add tests for backbones. Backbone classes should not be tested with all tests defined in `test_modeling_common.py`, instead they should have separate tests. Here I'd like to discuss the best way to add these tests.
- [ ] make fixup is currently complaining about the following:
```
Exception: There were 2 failures:
MaskFormerSwinBackbone is defined in
transformers.models.maskformer.modeling_maskformer_swin but is not present in
any of the auto mapping. If that is intended behavior, add its name to
`IGNORE_NON_AUTO_CONFIGURED` in the file `utils/check_repo.py`.
ResNetBackbone is defined in transformers.models.resnet.modeling_resnet but is
not present in any of the auto mapping. If that is intended behavior, add its
name to `IGNORE_NON_AUTO_CONFIGURED` in the file `utils/check_repo.py`
```
=> however I added both MaskFormerSwinBackbone and ResNetBackbone to modeling_auto.py, so not sure why this fails. cc @sgugger
## MaskFormer specifics
MaskFormer supports both ResNet and Swin as backbone. It does support native ResNets, but it doesn't use the native Swin as backbone, which is why we have a separate `MaskFormerSwinModel` class in the library, as well as a `MaskFormerSwinBackbone` class in this PR.
Happy to discuss the design! | 11-14-2022 12:20:47 | 11-14-2022 12:20:47 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20204). All of your documentation changes will be reflected on that endpoint.<|||||>Closing this PR as it has been added in smaller separate PRs.<|||||>Thanks again for splitting it, it was really better this way! |
transformers | 20,203 | closed | update relative positional embedding | # What does this PR do?
The current `relative_key` positional embedding is incorrect when `use_cache=True`. This was first highlighted in #19045.
Reproducing script:
```python
import torch
from transformers import BertTokenizer, BertLMHeadModel, set_seed
tokenizer = BertTokenizer.from_pretrained("zhiheng-huang/bert-base-uncased-embedding-relative-key")
model = BertLMHeadModel.from_pretrained("zhiheng-huang/bert-base-uncased-embedding-relative-key", is_decoder = True)
inputs = tokenizer("No I'm not missing the ", return_tensors="pt")
input_ids = inputs.input_ids[:,:-1]
attention_mask = inputs.attention_mask[:,:-1]
with torch.no_grad():
model.config.use_cache = False
set_seed(0)
output = model(input_ids, attention_mask = attention_mask, use_cache =False)
print(output.logits[:,-1,:])
model.config.use_cache = True
output_1 = model(input_ids[:,:-1], use_cache = True, attention_mask = attention_mask[:,:-1])
pkv = output_1.past_key_values
output_2 = model(input_ids[:,-1:], past_key_values = pkv , use_cache = True)
print(output_2.logits[:,-1,:])
```
```
tensor([[-5.4971, -6.4888, -8.3359, ..., -7.3612, -5.5480, -0.9784]])
tensor([[ -7.2693, -7.7799, -10.0905, ..., -7.5183, -7.4255, -4.6804]])
```
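For context, the gist of the fix is that with `use_cache=True` the single query token must use its absolute position (derived from the key length) when computing the relative distances. A sketch of the corrected distance computation (illustrative, not the exact diff):
```python
import torch


def relative_distance(query_length: int, key_length: int, use_cache: bool) -> torch.Tensor:
    # With the cache, the single query token sits at absolute position key_length - 1,
    # not at position 0, which is what the old code implicitly assumed.
    if use_cache:
        position_ids_l = torch.tensor(key_length - 1, dtype=torch.long).view(-1, 1)
    else:
        position_ids_l = torch.arange(query_length, dtype=torch.long).view(-1, 1)
    position_ids_r = torch.arange(key_length, dtype=torch.long).view(1, -1)
    return position_ids_l - position_ids_r  # fed into the relative-key embedding lookup
```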
Lets also make sure that this feature is tested using `create_and_check_decoder_model_past_large_inputs` | 11-14-2022 11:48:23 | 11-14-2022 11:48:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint.<|||||>cc @sgugger just FYI, will ping you once I know that all the tests pass. <|||||>( before the modification the added test did not pass, now they do) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20203). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,202 | closed | Downgrade log warning -> info | # What does this PR do?
Downgrades the logged messages about the size parameter being converted from int/tuple -> dict from warning to info.
## Fixes
Addresses part of the issue raised in #20185 - where many downstream tasks would have multiple warning messages, potentially scaring the users and spamming their logs.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-14-2022 10:16:10 | 11-14-2022 10:16:10 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20202). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you! |
transformers | 20,201 | closed | distributed training | Why is the training argument n_gpu set to 1 when using the Trainer's distributed training?
This means only one device is allowed per node,
and the printed training parameters are calculated with n_gpus=1.
I want to know what should i do when every node has multi gpus. | 11-14-2022 09:36:03 | 11-14-2022 09:36:03 | Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,200 | closed | mark `test_save_load_fast_init_from_base` as `is_flaky` | # What does this PR do?
Mark `test_save_load_fast_init_from_base` as `is_flaky`.
- This test is known to be flaky, see #19849
- The level of flakiness seems to get higher after #20042
- **ran 5 times, and all passed**.
- **TODO**: check why #20042 makes this test more flaky. | 11-14-2022 09:25:01 | 11-14-2022 09:25:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I guess the best first step is to run it 1000 times (in a loop) and see how many times it fails (before and after your PR 20042).
Sometimes it is difficult to get reproducible failure if it is flaky.<|||||>As I commented before, this probably comes from models having weights initialized outside of the `_init_weights` function. A way to debug would be to drop somewhere which weight was randomly dropped when the test has failed (if it's printed for instance, we can look for it in the logs) |
transformers | 20,199 | closed | [docs] wrote i18n issue template | # What does this PR do?
Makes the issue template for translation, `i18n.md`, available. Thank you @omarespejel https://github.com/huggingface/transformers/pull/17004
Some questions:
- Would you prefer a `.yml` format for templates? I can convert it if you wish, @sgugger.
- What labels should be added to the issue? `docs`, `i18n`, `help-wanted`, `WIP` perhaps?
Part of https://github.com/huggingface/transformers/issues/20183, an effort to update the translation guide.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Hello @sgugger, may you please review this PR? | 11-14-2022 08:30:39 | 11-14-2022 08:30:39 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20199). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks a lot for drafting this! I think the `WIP` label is fine, it would avoid the issue getting closed by the stale-bot.
Ok, adding it in. So resulting lables will be `WIP` only.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20199). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,198 | closed | Fix bug in segmentation postprocessing | # What does this PR do?
- Fixes bug in `MaskFormerFeatureExtractor.compute_segments()` that causes an error when label_ids_to_fuse is set to None during instance and panoptic segmentation post-processing.
- Adds test to check if target labels are fused correctly
Fixes #20132
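For readers, a minimal sketch of the kind of guard implied by the first bullet (an assumed shape of the fix, not the actual diff; argument names are illustrative):
```python
def compute_segments(mask_probs, pred_scores, pred_labels, label_ids_to_fuse=None):
    # Treat "no labels to fuse" as an empty set instead of passing None around,
    # so membership checks like `pred_class in label_ids_to_fuse` cannot fail.
    label_ids_to_fuse = set() if label_ids_to_fuse is None else label_ids_to_fuse
    ...
```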
## Before submitting
- [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)?
https://github.com/huggingface/transformers/issues/20132
| 11-14-2022 08:17:25 | 11-14-2022 08:17:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20198). All of your documentation changes will be reflected on that endpoint.<|||||>> Could you add a corresponding test for this?
Added a test to `test_feature_extraction_maskformer.py` to test label fusing, I think I covered all cases but the test might still be a bit flaky due to using dummy model outputs.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20198). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge I thought this issue was fixed with the ImageProcessor PR but the MaskFormer instance segmentation post-processing issue still persists. I updated the PR and added a test for label fusing. Could you take another look and approve if everything looks good?<|||||>> PR looks good to me, except that modeling_vit.py should not be changed
The changes are reverted, could you take another look? <|||||>> Thanks for fixing!
>
> However I'm still wondering how MaskFormer would solve instance segmentation datasets which have overlapping instances, like COCO
We have MaskFormer models trained on the COCO panoptic and ADE semantic segmentation datasets; the other models don't have model cards yet. But I can test it if one of the other models is trained on an instance segmentation dataset.
|
transformers | 20,197 | closed | [docs] set overflowing image width to auto-scale | # What does this PR do?
The image shown below overflowed on small screens. A simple inline-css to set the same width as its div solved the problem.
before | after
------- | -------
 | 
Fixes #20196
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Hello @sgugger, may you please review this simple PR?
| 11-14-2022 07:40:23 | 11-14-2022 07:40:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for fixing this, it looks good to me!
@LysandreJik Do you mind double-checking?<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20197). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,196 | closed | [docs] index page has overflowing image | The current index page has an overflowing image for huggingface.co/support as shown below.

This can be fixed easily by adding `width=100%;` to the inline style.
Note: all current languages are affected. | 11-14-2022 07:33:12 | 11-14-2022 07:33:12 | |
transformers | 20,195 | closed | Wav2vec2 Pretraining issue | ### System Info
```shell
transformers version-4.11.3
```
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
https://colab.research.google.com/drive/1kepA7ryMG7YmNtSYjiJjBM984KRbpZuV#scrollTo=LdIxS2EEgMmz
### Expected behavior
```shell
I tried to run the pre-training demo of wav2vec2 on LibriSpeech but I ran into this error:
unrecognised arguments, or no module named 'transformers.modeling_outputs'
```
### Checklist
- [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [X] I checked if a related official extension example runs on my machine. | 11-14-2022 04:40:54 | 11-14-2022 04:40:54 | The script you are using probably requires a more recent version of Transformers. cc @sanchit-gandhi <|||||>As @sgugger has mentioned, you can relax the pinning constraints on all your HF libraries and install them from their latest versions:
```
!pip install transformers datasets accelerate jiwer
```
Also, you've got to be careful downgrading `torchaudio` like that as it can mess-up your torch installation. My recommendation would be to leave the default `torchaudio` version and update the Unix package `ffmpeg` to version 4:
```
!add-apt-repository -y ppa:jonathonf/ffmpeg-4
!apt update
!apt install -y ffmpeg
```
(taken from this blog https://huggingface.co/blog/fine-tune-whisper#prepare-environment)
<|||||>Hi @Kshitizkhandel :) - I can run it on `4.25.0.dev0` so it is a problem with a version.<|||||>> As @sgugger has mentioned, you can relax the pinning constraints on all your HF libraries and install them from their latest versions:
>
> ```
> !pip install transformers datasets accelerate jiwer
> ```
>
> Also, you've got to be careful downgrading `torchaudio` like that as it can mess-up your torch installation. My recommendation would be to leave the default `torchaudio` version and update the Unix package `ffmpeg` to version 4:
>
> ```
> !add-apt-repository -y ppa:jonathonf/ffmpeg-4
> !apt update
> !apt install -y ffmpeg
> ```
>
> (taken from this blog https://huggingface.co/blog/fine-tune-whisper#prepare-environment)
Thank you for your swift response on this. @sanchit-gandhi I'm running out of disk space even on google colab pro, do you recommend using colab pro plus/ or anything that you suggest?<|||||>Hey @Kshitizkhandel. The full LibriSpeech dataset contains approx 1000h of labelled audio data. As such, it requires ~130GB disk space to download and prepare, which is why you're running into trouble on a Google Colab! Pre-training is an intensive task, both for compute and storage, so it's only really advised if you really need to do it. Otherwise, you can take a pre-trained checkpoint and fine-tune it for your downstream application (see https://huggingface.co/blog/fine-tune-wav2vec2-english).
May I ask what the purpose is for pre-training? Are you simply wanting to try it for yourself, or do you have the intent of pre-training on your own custom dataset?
The reason that I ask is that the 'official' Wav2Vec2 checkpoints are pre-trained on 60,000h of audio data. These checkpoints are available to you on the HF Hub:
- https://huggingface.co/facebook/wav2vec2-base
- https://huggingface.co/facebook/wav2vec2-large-lv60
-> so if you want a pre-trained checkpoint, you can skip the pre-training and load any of these pre-trained checkpoints
There are also multiple fine-tuned checkpoints that you can use out-of-the-box for downstream speech recognition:
- https://huggingface.co/facebook/wav2vec2-base-960h
- https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self
-> see the aforementioned blog for fine-tuning these checkpoints
Closing this issue as the original question has been solved! Feel free to ask on the forum if you require any additional assistance with your task and I'll be more than happy to help @Kshitizkhandel! https://discuss.huggingface.co
Best of luck!<|||||>> Hey @Kshitizkhandel. The full LibriSpeech dataset contains approx 1000h of labelled audio data. As such, it requires ~130GB disk space to download and prepare, which is why you're running into trouble on a Google Colab! Pre-training is an intensive task, both for compute and storage, so it's only really advised if you really need to do it. Otherwise, you can take a pre-trained checkpoint and fine-tune it for your downstream application (see https://huggingface.co/blog/fine-tune-wav2vec2-english).
>
> May I ask what the purpose is for pre-training? Are you simply wanting to try it for yourself, or do you have the intent of pre-training on your own custom dataset?
>
> The reason that I ask is that the 'official' Wav2Vec2 checkpoints are pre-trained on 60,000h of audio data. These checkpoints are available to you on the HF Hub:
>
> * https://huggingface.co/facebook/wav2vec2-base
> * https://huggingface.co/facebook/wav2vec2-large-lv60
>
> -> so if you want a pre-trained checkpoint, you can skip the pre-training and load any of these pre-trained checkpoints
>
> There are also multiple fine-tuned checkpoints that you can use out-of-the-box for downstream speech recognition:
>
> * https://huggingface.co/facebook/wav2vec2-base-960h
> * https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self
>
> -> see the aforementioned blog for fine-tuning these checkpoints
>
> Closing this issue as the original question has been solved! Feel free to ask on the forum if you require any additional assistance with your task and I'll be more than happy to help @Kshitizkhandel! https://discuss.huggingface.co
>
> Best of luck!
I've fine-tuned projects and was looking to try something new but given the hassles it doesn't seem to be a conducive option. Thanks for your help on this @sanchit-gandhi <|||||>Pre-training is certainly possible, you just need a lot of disk space and compute time for it to be worthwhile!
If you want to try something new, you can check out the Whisper model from OpenAI ๐ https://huggingface.co/blog/fine-tune-whisper This gets very good results with very little fine-tuning!<|||||>> Pre-training is certainly possible, you just need a lot of disk space and compute time for it to be worthwhile!
>
> If you want to try something new, you can check out the Whisper model from OpenAI ๐ https://huggingface.co/blog/fine-tune-whisper This gets very good results with very little fine-tuning!
Yeah, I read your lucid and well articulated blog on it. Great work! |
transformers | 20,194 | closed | Spelling Error in Testing Documentation - "checkt" -> "check" | ### System Info
- `transformers` version: 4.24.0.dev0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@sgugger @stevhliu
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Under `Run documentation tests` section of [Testing documentation](https://huggingface.co/docs/transformers/testing#run-documentation-tests) it says: `In order to test whether the documentation examples are correct, you should checkt that the doctests are passing.`
### Expected behavior
The line can be changed to: `In order to test whether the documentation examples are correct, you should check that the doctests are passing.` | 11-13-2022 22:23:12 | 11-13-2022 22:23:12 | Thanks for reporting, would you like to open a PR to fix this typo?<|||||>@sgugger sure I can open a PR to fix this type. |
transformers | 20,193 | closed | Urgent! Weird behavior of CLIPTokenizer when encoding out of vocabulary /non-English text with openai/clip-vit-base-patch32, and question about merges.txt. | ### System Info
**Environment is Google colab**
- `transformers` version: 4.24.0
- Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.15
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.12.1+cu113 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@patil-suraj I've seen you answering questions about CLIP before and would really appreciate your help. Anyone else's prompt help would also be deeply appreciated, because I'm in the middle of a project.
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
The easiest way to reproduce is to open:[this colab notebook](https://colab.research.google.com/drive/1TjXIAWItblSGKFaCND9dYr7qrHoD4R4G#scrollTo=6zDhnjx9HjW1)
Step 1:
Download and import tokenizer
```
!pip install transformers
!pip install datasets
!pip install -qq transformers ftfy
!pip install -qq "ipywidgets>=7,<8"
from transformers import CLIPTokenizer
!transformers-cli env
```
Step 2: Test how out-of-vocab tokens are actually tokenized
```
tokenizer = CLIPTokenizer.from_pretrained(
"openai/clip-vit-base-patch32",
)
print("tokenizer vocab size:",len(tokenizer.encoder))
print("tokenize known sequence:",tokenizer.tokenize(" . "))
print("tokenize unknow sequence:",tokenizer.decode(tokenizer("่ฐ ่ฐ")['input_ids']))
print("Unknown token should be:",tokenizer.unk_token)
print("But tokenize() maps the unknown tokens to the tokens and values below")
tokenizer.tokenize("่ฐ ่ฐ"),tokenizer.convert_tokens_to_ids(tokenizer.tokenize("่ฐ ่ฐ"))
```
Step 3: Test abnormal behavior of adding space inside
```
print("Inconsistent input and output: a space is added")
encoded=tokenizer.encode(tokenizer.tokenize("«è"))
print("Encoded:",encoded)#only has two values besides start and end tokens but decoded to 3 tokens?
print("Expected: <|startoftext|>ยซรจ <|endoftext|>. (no space between ยซ and รจ)\nActual output:",tokenizer.decode(encoded))
```
Step 4: Test how this leads to being able to encode and decode a non-English sentence back while it should become unknown tokens. It's not useful anyway because now the decoded sentence has spaces added inside words.
```
input="ยซรจ cosa ormai risaputa che a uno scapolo in possesso di un'ingente fortuna manchi soltanto una moglie. questa veritร รจ cosรญ radicata nella mente delle famiglie del luoho che, nel momento in cui un simile personaggio viene a far parte del vicinato, prima ancora di conoscere anche lontanamente i suoi desiderรฎ in proposito, viene immediatamente considerato come proprietร legittima di una o l'altra delle loro figlie.ยปorgoglio e pregiudizio รจ uno dei primi romanzi di jane austen. la scrittrice lo iniziรฒ a ventun anni; il libro, rifiutato da un editore londinese, rimase in un cassetto fino alla sua pubblicazione anonima nel 1813, e da allora รจ considerato tra i piรบ importanti romanzi della letteratura inglese. รจ la storia delle cinque sorelle bennet e dei loro corteggiatori, con al centro il romantico contrasto tra l'adorabile e capricciosa elizabeth e l'altezzoso darcy; lo spirito di osservazione implacabile e quasi cinico, lo studio arguto dei caratteri, la satira delle vanitร e delle debolezze della vita domestica, fanno di questo romanzo una delle piรบ efficaci e indimenticabili commedie di costume del periodo regency inglese."
output=tokenizer.decode(tokenizer.encode(input))
print(output[:15],output[-14:])
output=output[15:-14]#removing start token and end token
print('Input: ',input)
print('Output:',output)
```
### Expected behavior
I'm dealing with a text dataset that is multilingual and wish to tell which sentence is non-English by counting the percentage of unk_tokens after tokenizing.
Based on my understanding, tokenizer.encode(string) is equivalent to tokenizer.convert_tokens_to_ids(tokenizer.tokenize(string)) and should map tokens that are not in the vocab to tokenizer.unk_token. Also, spaces are ignored during encoding, and decoding adds spaces between tokens.
However, this is not the case: tokenize() seems to map them to some token in merges.txt (https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/merges.txt) and then map them to ids according to vocab.json (https://huggingface.co/openai/clip-vit-base-patch32/resolve/main/vocab.json).
Sometimes this seems to separate the text further into sub-tokens according to merges.txt and thus add spaces between them when decoding.
This behavior is very annoying because it treats non-English text and English text in the same way, decodes the sentence back to its original form, but randomly adds spaces inside these non-English words.
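For anyone hitting the same confusion, here is a small sketch (not from the original report) that makes the byte-level fallback visible; the checkpoint is the same one as above and the non-English string is arbitrary:
```python
# Minimal sketch: CLIP's tokenizer is a byte-level BPE, so out-of-vocabulary characters
# are first mapped to bytes and then merged into byte-level pieces that do exist in the
# vocab, which is why no unk_token ever shows up for non-English text.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")

text = "안녕하세요"  # arbitrary non-English string, used only for illustration
pieces = tokenizer.tokenize(text)
print(pieces)  # several byte-level pieces; none of them is the unk token

# A hedged heuristic for spotting non-English sentences: the byte-level fallback tends
# to produce far more pieces per character than ordinary English words, so a high ratio
# (threshold to be tuned on your own data) can stand in for the unk-token count.
print(len(pieces) / len(text))
```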
Thanks very much for your patience and help. | 11-13-2022 19:03:45 | 11-13-2022 19:03:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,192 | closed | Typo on docstring in ElectraTokenizer | # What does this PR do?
Typo on docstring in [ElectraTokenizer](https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/models/electra/tokenization_electra.py#L93).
This should be modified from **Bert** to **Electra** for readability.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-13-2022 17:44:59 | 11-13-2022 17:44:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20192). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for the guide when I was confused with the copy check. ๐
I added the part `,BERT->Electra` you said.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20192). All of your documentation changes will be reflected on that endpoint.
transformers | 20,191 | closed | fix convert longformer to onnx bug | class `LongformerOnnxConfig`
Its `inputs` property is missing one input, `"token_type_ids"`:
```text
# before fix : onnx input node
name: "token_type_ids"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_value: 2
}
dim {
dim_value: 8
}
}
}
}
# after fix : onnx input node
name: "token_type_ids"
type {
tensor_type {
elem_type: 7
shape {
dim {
dim_param: "batch"
}
dim {
dim_param: "sequence"
}
}
}
}
``` | 11-13-2022 16:49:10 | 11-13-2022 16:49:10 | @deutschmn <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @lewtun <|||||>As far as I know, Longformer, like RoBERTa, doesn't use token_type_ids see #9111 and https://huggingface.co/docs/transformers/model_doc/longformer<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @fxmarty in case he has bandwidth to take a look<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20191). All of your documentation changes will be reflected on that endpoint.<|||||>I think
> As far as I know, Longformer, like RoBERTa, doesn't use token_type_ids see https://github.com/huggingface/transformers/issues/9111 and https://huggingface.co/docs/transformers/model_doc/longformer
is a good answer!
@SysuCharon could you provide a code that you are expecting to see working but which does not?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>not stale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
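For readers less familiar with the ONNX export configs, here is a hypothetical sketch (not the actual library code) of the kind of `inputs` property the shape change at the top of this PR corresponds to; the class name is made up, and whether `token_type_ids` belongs at all is exactly what the discussion above questions:
```python
# Sketch of declaring dynamic axes in a transformers ONNX config: mapping dims 0/1 of an
# input to the symbolic "batch"/"sequence" names is what replaces the fixed (2, 8) dims
# shown in the "before fix" dump with dim_param entries in the exported graph.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class SketchOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                # illustrative only: per the comments, Longformer does not use these
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )
```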
transformers | 20,190 | closed | Add clip resources to the transformers documentation | # What does this PR do?
<!-- Remove if not applicable -->
Fixes #20055 (partially)
## Before submitting
- [x] This PR improves the docs of CLIP by adding common and most used resources
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #20055
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
## Who can review?
@stevhliu Please can you have a look?
| 11-13-2022 14:23:17 | 11-13-2022 14:23:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20190). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20190). All of your documentation changes will be reflected on that endpoint.<|||||>Folks, please make sure resources which are added are talking about the particular model.
|
transformers | 20,189 | closed | Hanging in TextClassificationPipeline's prediction | ### System Info
- `transformers` version: 4.22.2
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35
- Python version: 3.8.15
- Huggingface_hub version: 0.10.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@Narsil @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using this class. It hangs forever after deploying and sending a request to my server (using falcon and gunicorn) that calls the `predict` function. However, when I call it in a simple script, everything is ok, and its predictions are returned.
```python
class TrainStage:
def __init__(self, config):
self.config = config
def fit(self):
model_config = AutoConfig.from_pretrained(self.config.pretrained_model_path)
self.tokenizer = AutoTokenizer.from_pretrained(self.config.pretrained_model_path)
self.model = AutoModelForSequenceClassification.from_pretrained(
self.config.pretrained_model_path,
config=model_config
)
training_args = TrainingArguments(...)
trainer = Trainer(...)
trainer.train()
def transform(self, texts: List[str]):
pipeline = TextClassificationPipeline(model=self.model, tokenizer=self.tokenizer)
results = pipeline(texts)
return results
```
### Expected behavior
Returning predictions | 11-13-2022 11:53:52 | 11-13-2022 11:53:52 | @amiralikaboli Can you try setting `TOKENIZERS_PARALLELISM=0` before calling your script ?
You might be triggering: https://github.com/huggingface/transformers/issues/5486
Basically `tokenizers` does parallelism by default, but it can be messed up by other sources of parallelism, most cases are handled, but maybe you found a way to trigger the deadlock.
Using that might help at least make sure this is not the issue.
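For reference, a minimal sketch of how that suggestion is usually applied; the environment variable is the real `tokenizers` switch, while the exact placement depends on how gunicorn forks your workers:
```python
# Disable the Rust tokenizers' internal thread pool before any tokenization happens,
# e.g. at the very top of the module gunicorn imports, so forking afterwards is safe.
import os

os.environ["TOKENIZERS_PARALLELISM"] = "false"  # "0" works as well

from transformers import TextClassificationPipeline  # imported only after the flag is set
```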
And if the problem is still there, could you provide a simple reproducing script ?<|||||>@Narsil Actually, It doesn't work. The script I use for training and predicting my model is similar to the above script. Do you want a script for a server/client which runs the model?<|||||>If you could provide a simple (one file) script that's easily launchable to reproduce that'd be perfect yes.
The bug you're encountering is almost certainly a deadlock linked to multiple libs doing parallelism in some way hurting each other. Without the full script to reproduce it's hard to pinpoint though. Also are you on Mac or Linux (there's different default behavior for forking if my memory serves correctly).<|||||>You are right about the deadlock. We loaded our model several times as a temporary solution, and it worked. But, it is not ideal because of more resource usage.
Providing you with a simple script exactly based on our case is not possible since we used a private library over falcon.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,188 | closed | Fix a typo in examples/pytorch/translation/README.md | # What does this PR do?
There is a typo in the original hyperlink.
Below is the original version:
```markdown
Based on the script [`run_translation_no_trainer.py`] (https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/ **run_translationn_no_trainer.py**)
```
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-13-2022 10:46:53 | 11-13-2022 10:46:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20188). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,187 | closed | How to train my own dataset? | ### Feature request
Could you provide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks!
### Motivation
Could you provide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks!
### Your contribution
Could you provide a tutorial for training the whole model using my own dataset? I am a new learner and very confused. Thanks! | 11-13-2022 07:20:50 | 11-13-2022 07:20:50 | You could start by checking out the [course](https://huggingface.co/course/chapter1/1). Also please use the [forums](https://discuss.huggingface.co/) for questions like this, since we keep features for bugs and feature requests only.
transformers | 20,186 | closed | Weird bugs when using ```run_image_classification.py``` | ### System Info
Dear authors,
Thanks for your great work. I had a small dataset as I share with you here: https://drive.google.com/drive/folders/1-GAFdH0S16SlYXPdOwDmqijCrpIeeLbm?usp=sharing.
When I trained a ViT for image classification with ```run_image_classification.py```, I used the command below:
```
!CUDA_DIVISIBLE_DEVICES=0, python3 run_image_classification.py \
--train_dir $TRAIN_DIR \
--output_dir $OUTPUT_DIR \
--remove_unused_columns False \
--do_train \
--do_eval \
--learning_rate 2e-5 \
--num_train_epochs 10 \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 32 \
--logging_strategy steps \
--logging_steps 10 \
--evaluation_strategy epoch \
--save_strategy epoch \
--load_best_model_at_end True \
--save_total_limit 3 \
--seed 1337 \
--overwrite_output_dir
```
I got an error:
```
Traceback (most recent call last):
File "run_image_classification.py", line 388, in <module>
main()
File "run_image_classification.py", line 240, in main
task="image-classification",
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1681, in load_dataset
**config_kwargs,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1453, in load_dataset_builder
data_files=data_files,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1089, in dataset_module_factory
download_mode=download_mode,
File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 701, in get_module
base_path=base_path,
File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 801, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 763, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/usr/local/lib/python3.7/dist-packages/datasets/data_files.py", line 368, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['/content/drive/MyDrive/EOS/OVO/AF/**']' at /content/drive/MyDrive/EOS/codes
```
while if I do:
```
os.listdir("/content/drive/MyDrive/EOS/OVO/AF/")
```
I got:
```
['Altered_material', 'Free_crystal', 'Lithic', 'Juvenile']
```
And all the images are still there. May I ask if you have any advice?
Thanks! I look forward to hearing from you asap!
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
As above
### Expected behavior
As above | 11-13-2022 04:48:26 | 11-13-2022 04:48:26 | Hi,
Could you check whether [this part of the code](https://github.com/huggingface/transformers/blob/2308f3d42cff281cecee413f97f19044f54636d7/examples/pytorch/image-classification/run_image_classification.py#L231-L241) runs fine?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
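For readers hitting the same `FileNotFoundError`, a hedged sanity check (not part of the original thread) is to load the folder directly with the 🤗 Datasets `imagefolder` loader; the path below is simply the one from the report:
```python
# If this fails too, the problem is the Drive mount / glob resolution rather than
# run_image_classification.py itself.
from datasets import load_dataset

data_dir = "/content/drive/MyDrive/EOS/OVO/AF"  # path taken from the report above
ds = load_dataset("imagefolder", data_dir=data_dir)
print(ds)
print(ds["train"].features["label"].names)  # should list the four class folders
```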
transformers | 20,185 | closed | Difficult to understand warning | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- Huggingface_hub version: 0.11.0.dev0
- PyTorch version (GPU?): 1.11.0+cpu (False)
- Tensorflow version (GPU?): 2.9.1 (False)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.16
- JaxLib version: 0.3.15
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@amyeroberts @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import CLIPImageProcessor
processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-large-patch14")
```
=> this throws the following warning:
```
The size parameter should be a dictionary with keys ('height', 'width'), ('shortest_edge', 'longest_edge') or ('shortest_edge',) got 224. Setting as {'shortest_edge': 224}.
The size parameter should be a dictionary with keys ('height', 'width'), ('shortest_edge', 'longest_edge') or ('shortest_edge',) got 224. Setting as {'height': 224, 'width': 224}.
```
### Expected behavior
I'm wondering whether it would be better to for now downgrade the warning here: https://github.com/huggingface/transformers/blob/6cc06d17394f5715cdf2d13a1ef7680bedaee9e2/src/transformers/image_processing_utils.py#L500 to just `logger.info(...)`
The `openai/clip-vit-large-patch14` model is used **a lot** in downstream applications (CompVis, diffusers) and the thrown warning here is maybe a bit too much and scares the user?
Or should we aggressively update all the configs of CLIP models on the Hub in order to make the warning go away? That would require updating the online configs of a lot of CLIP models.
| 11-12-2022 21:35:33 | 11-12-2022 21:35:33 | Updating the config online would make them incompatible with previous versions of Transformers so that's not a possibility. I'm fine with downgrading the warnings to infos.<|||||>I wonder why that warning is being printed twice, one time for "shortest_edge" and one time for "height, width"
<|||||>I'll open a PR to downgrade the warnings. I agree the message isn't very clear either - I'll open up another PR to tidy this up.
@NielsRogge This is likely because there are two parameters, `size` and `crop_size`, that are being converted to a dict - which makes the point about the warnings being unclear quite well!
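As a side note for downstream users who just want the message gone today, a hedged workaround is to pass the sizes in the dictionary form the processor expects, with the values taken from the warning itself:
```python
# Passing size / crop_size as dicts up front should leave nothing to convert, so the
# two messages discussed above are not expected to be emitted.
from transformers import CLIPImageProcessor

processor = CLIPImageProcessor.from_pretrained(
    "openai/clip-vit-large-patch14",
    size={"shortest_edge": 224},
    crop_size={"height": 224, "width": 224},
)
```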
transformers | 20,184 | closed | New logging support to "Trainer" Class (ClearML Logger) | I have added a ClearML callback class to log experiments using a `ClearML Task`. The ClearML logger logs everything to the ClearML WebUI: hyperparameters, scalars, models, checkpoints, and other necessary artifacts. I request @sgugger to review this pull request as it works with the `Trainer` class. The `integrations.py` file contains the major contents of this pull request through the `ClearMLCallback` class. Some other small changes have been made to another set of files to maintain the integrity of the codebase, such as adding simple basic tests.
> ClearML is a leading MLOps stack that can supercharge HuggingFace Transformers training and tracking with its state-of-the-art experiment tracking capability. ClearML: https://clear.ml/
**What ClearML Experiment Manager can log? Everything! You just name it. Example Screenshots:**








###
**Example script to utilize ClearML Callback with Trainer:**
[trainer_with_clearml.zip](https://github.com/huggingface/transformers/files/9995390/trainer_with_clearml.zip)
| 11-12-2022 16:17:55 | 11-12-2022 16:17:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger , We have made the changes according to your suggestions. Please have a look. Thank you for your co-operation.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20184). All of your documentation changes will be reflected on that endpoint.<|||||>I'm finding this integration very aggressive.
Basically, it seems to require that if the clearml python package is installed, you must have clearml set up in order to use huggingface.
We have optional clearml setup in our environment (with a private clearml server), but not using it in most situations at the moment.
This now started crashing with
```
File /usr/local/lib/python3.8/dist-packages/clearml/backend_api/session/session.py:180, in Session.__init__(self, worker, api_key, secret_key, host, logger, verbose, initialize_logging, config, http_retries_config, **kwargs)
177 raise ValueError("ClearML host was not set, check your configuration file or environment variable")
179 if not self._offline_mode and (not self.secret_key and not self.access_key and not self.__auth_token):
--> 180 raise MissingConfigError()
182 self._ssl_error_count_verbosity = self.config.get(
183 "api.ssl_error_count_verbosity", self._ssl_error_count_verbosity)
185 self.__host = host.strip("/")
MissingConfigError: It seems ClearML is not configured on this machine!
To get started with ClearML, setup your own 'clearml-server' or create a free account at https://app.clear.ml/
Setup instructions can be found here: https://clear.ml/docs
```
EDIT: Now I realize that I missed that `report_to` in fact defaults to `all` and that's gist of the issue. The wording of the ClearML documentation implies quite a bit that it needs to be added to `report_to` manually, though, which confused me.<|||||>You should have seen a log informing you that `report_to` was defaulting to all tools installed (issued [here](https://github.com/huggingface/transformers/blob/3830b3f74a57b3bcf6f14016c483fb0bb14b01ce/src/transformers/training_args.py#L1218)). We should probably change it to a warning so that it's more visible, but the default will change in v5 of Transformers (from all to None). |
transformers | 20,183 | closed | [docs] translating.md needs an update | Dear ๐ค HuggingFace Team,
I would like to suggest updating the guide for issues in the following sections:
- ๐๏ธ Open an issue
- [x] As of now, there is no template called `Translation template` while opening an issue. Instead I had to copy-paste issues made by the community. https://github.com/huggingface/transformers/pull/20199
- [ ] To view the translated documentation in the _first_ PR for a new language, one should first update line 18 of `build_documentation.yml` and line 17 of `build_pr_documentation.yml` from `.github/workflows` to include their language code. This should be expained in the guide, ideally in a separate section called `Open a pull request`.
- ๐ Copy-paste the English version with a new language code
- [ ] It should be made clear that this step is only needed for the first PR, and subsequent PRs can get the documents they want translated from `en` separately, with updates to `_toctree.yml` done in tandem.
- ๐ etc.
- [ ] For non-alphabet languages, translators must put in a custom link as shown [here](https://github.com/huggingface/doc-builder#writing-documentation-for-hugging-face-libraries:~:text=a%20way%20to%20customize%20the%20anchor%20link) (screenshot included below). <hr/>

As a possible solution we could:
- refactor the document to 2 sections: `New languages` and `In-progress languages`
- add more content regarding how the first pull request should be done
- and account for non-alphabet languages by providing a small sidenote
Thank you so much for this wonderful library. | 11-12-2022 13:53:15 | 11-12-2022 13:53:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for the drag in progress, but may you please put the `WIP` tag to this issue as well? Thank you @sgugger.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am working on this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,182 | closed | Make tokenizer.pad() compatible with labels | ### Feature request
In my project, I need to do some custom sorting and filtering in my sampler. Therefore, each item in my Dataset is already tokenized/truncated (and not padded). So the padding occurs in the `collate_fn` of the dataloader. Here I would need to pad both inputs and labels. While this is straightforward to do with `tokenizer.pad()` for the inputs (input_ids, attention_mask) I cannot get this to work for labels.
This first example just tries to put two samples (dictionaries with keys input_ids, attention_mask, and labels) through tokenizer.pad. I would not expect this to work because in seq2seq, inputs and outputs can have different lengths, so the total max. sequence length will differ.
```
import torch
from transformers import MBartTokenizer
samples = [{'input_ids': torch.LongTensor([8622, 621, 5941, 2750, 765, 10, 10422, 111, 104687,
27771, 4, 92136, 538, 100244, 3642, 8966, 85493, 4,
53927, 621, 24911, 5245, 552, 49725, 4, 398, 621,
1053, 26255, 13081, 34, 59207, 4, 87, 73275, 13,
110, 2661, 9, 36904, 297, 65842, 7, 5, 2,
250004]),
'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
'labels': torch.LongTensor([250132, 250147, 250122, 250132, 250092, 5941, 250030, 250148, 250132,
10422, 250104, 250031, 250132, 104687, 27771, 250057, 250135, 250132,
250057, 250057, 250057, 250057, 2, 250004])},
{'input_ids': torch.LongTensor([20625, 32692, 34475, 1821, 5792, 5941, 182417, 32, 4,
136201, 4, 201, 1530, 4, 7911, 2, 250004]),
'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
'labels': torch.LongTensor([250132, 250147, 250122, 250132, 8337, 250104, 250030, 250132, 32692,
250057, 250031, 250132, 199, 1681, 250031, 250148, 250132, 765,
250132, 250145, 250057, 250057, 250123, 250132, 136, 250072, 136201,
250073, 201, 1530, 250074, 7911, 250057, 250057, 2, 250004])}]
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
padded = tokenizer.pad(samples,
padding=True,
return_tensors="pt")
```
Error:
> Traceback (most recent call last):
> File "transformers\tokenization_utils_base.py", line 715, in convert_to_tensors
> tensor = as_tensor(value)
> ValueError: expected sequence of length 24 at dim 1 (got 36)
>
> During handling of the above exception, another exception occurred:
>
> Traceback (most recent call last):
> File "scratch_2.py", line 43, in <module>
> padded = tokenizer.pad(samples,
> File "transformers\tokenization_utils_base.py", line 2985, in pad
> return BatchEncoding(batch_outputs, tensor_type=return_tensors)
> File "transformers\tokenization_utils_base.py", line 210, in __init__
> self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
> File "transformers\tokenization_utils_base.py", line 731, in convert_to_tensors
> raise ValueError(
> ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected).
So I figured I would just split up my data into the inputs and the labels, but that also does not work:
```python
import torch
from transformers import MBartTokenizer
samples = [{'input_ids': torch.LongTensor([8622, 621, 5941, 2750, 765, 10, 10422, 111, 104687,
27771, 4, 92136, 538, 100244, 3642, 8966, 85493, 4,
53927, 621, 24911, 5245, 552, 49725, 4, 398, 621,
1053, 26255, 13081, 34, 59207, 4, 87, 73275, 13,
110, 2661, 9, 36904, 297, 65842, 7, 5, 2,
250004]),
'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
'labels': torch.LongTensor([250132, 250147, 250122, 250132, 250092, 5941, 250030, 250148, 250132,
10422, 250104, 250031, 250132, 104687, 27771, 250057, 250135, 250132,
250057, 250057, 250057, 250057, 2, 250004])},
{'input_ids': torch.LongTensor([20625, 32692, 34475, 1821, 5792, 5941, 182417, 32, 4,
136201, 4, 201, 1530, 4, 7911, 2, 250004]),
'attention_mask': torch.LongTensor([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]),
'labels': torch.LongTensor([250132, 250147, 250122, 250132, 8337, 250104, 250030, 250132, 32692,
250057, 250031, 250132, 199, 1681, 250031, 250148, 250132, 765,
250132, 250145, 250057, 250057, 250123, 250132, 136, 250072, 136201,
250073, 201, 1530, 250074, 7911, 250057, 250057, 2, 250004])}]
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25")
inputs = [{k: v for k, v in sample.items() if k in ["attention_mask", "input_ids"]} for sample in samples]
padded_inputs = tokenizer.pad(inputs,
padding=True,
return_tensors="pt")
labels = [{"labels": sample["labels"]} for sample in samples]
padded_labels = tokenizer.pad(labels,
padding=True,
return_tensors="pt")
```
Error:
> Traceback (most recent call last):
> File "scratch_2.py", line 49, in <module>
> padded_labels = tokenizer.pad(labels,
> File "transformers\tokenization_utils_base.py", line 2904, in pad
> raise ValueError(
> ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['labels']
My assumption is that the intention for `pad` has always been to only pad inputs and not labels but I might just be missing something.
### Motivation
Unless I am missing something, it is currently not straightforward to use tokenizer.pad() on labels.
### Your contribution
I'd need some guidance on what needs to change to make a tokenizer's `.pad` work with labels. If I know what to change, I can make a PR. | 11-12-2022 13:01:03 | 11-12-2022 13:01:03 | There is no canonical way to pad labels directly using the tokenizer, since what we want for padding depends on the task (we don't want any padding in sentence classification, but a different one in token classification, summarization or translation) and the tokenizer is not aware of the task.
In your code sample, the easiest way is just to replace the name `"labels"` by `"input_ids"` in your call to pad. As shown in all examples, the canonical way is to just do everything in `__call__`, or you can use the data collators to help as well, since they contain the code to pad adapted to the task at hand.<|||||>That makes a lot of sense! I've been spending too much time on translation-related topics that I forgot that this is not as straightforward as it seems. Perhaps in a perfect world, the tokenizer (+ model) would be task-aware. Just like some tokenizers can switch between a source/target language, they could then switch between tasks and therefore appropriate padding techniques. Maybe that would make automatic integrations with pipelines easy, too! This is just me dreaming - I am aware that there are probably billions of reasons not to implement it like that. :-) |
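To make the two suggestions concrete, here is a hedged sketch that reuses the `tokenizer` and the `samples` list from the report; the collator route assumes list-valued features, hence the small conversion:
```python
from transformers import DataCollatorForSeq2Seq

# Option 1: pad the labels by presenting them to tokenizer.pad() as "input_ids"
# (note this pads with the tokenizer's pad token id, not with -100).
padded_labels = tokenizer.pad(
    [{"input_ids": s["labels"]} for s in samples],
    padding=True,
    return_tensors="pt",
)["input_ids"]

# Option 2: let the seq2seq data collator pad inputs and labels together; labels are
# padded with -100 so the padded positions are ignored by the loss.
collator = DataCollatorForSeq2Seq(tokenizer, padding=True, label_pad_token_id=-100)
batch = collator([{k: v.tolist() for k, v in s.items()} for s in samples])
```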
transformers | 20,181 | closed | translate zh quicktour(#20095) | # What does this PR do?
Translate the quicktour to zh
#20095 | 11-12-2022 09:18:20 | 11-12-2022 09:18:20 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I have added them to the doc build workflow. @sgugger
transformers | 20,180 | closed | [i18n-KO] Translated index page to Korean | # What does this PR do?
Translated the `Getting started - index.mdx` file of the documentation to Korean. I use transformers for my side-project and wanted to contribute. As a beginner, this seemed appropriate.
Some questions:
- I can't access the pr documentation endpoint for Korean. Is there a missing step?
- I left all the other files alone after copying the English docs. Is this acceptable for future updates or should I add contents to the `_toctree.yml` only when translation is complete? Should I delete all other files for a cleaner review?
- In Korea, people use both `Transformer` and `트랜스포머` to describe the model. However, as this is more of a product of HuggingFace, I opted not to translate it. May you please let me know which you would prefer?
Thank you in advance for your review.
Part of https://github.com/huggingface/transformers/issues/20179
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger, may you please review this PR?
| 11-12-2022 08:53:30 | 11-12-2022 08:53:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>Hey! ๋ง๋์ ๋ฐ๊ฐ์์! Would love to hear more about the project you were working on and how transformers is used in Korea! ๐ค
Thanks a lot for your work!
- The PR documentation endpoint was updating and should now be visible! The `KO` is not appearing because I think you did not add the file to the `toctree` indeed.
- You should delete all the other files! We don't keep template files and if someone wants to translate for futur updates, they will just copy the single file ๐
- I think you did a great job in keeping `Transformer` as it is indeed more the name of the library! When talking about actual `transformers` architecture, its up to you, and depends on what is more used/understandable!
I will be glad to review once you have cleaned up the modified files! ๐๐ป Also cc @eunseojo just FYI ๐ <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>Hello @ArthurZucker, ์ ๋ ๋ง๋์ ๋ฐ๊ฐ์์! Thank you so much for your warm welcome and guidance.
Thank you for your help with this PR, @eunseojo ๐ค
- I made updates to `_toctree.yml` as you suggested and also added a language code to `.github/workflows/build_*_documentation`.
- So much simpler now! I translated the big section titles as they were mentioned in the `index.mdx` page. This raised errors from the pr docs workflow saying sections were empty,

so I added a placeholder called `in_translation.mdx`. As a result, the left sidebar is a zebra now ๐ฆ ๐คฃ .

How would you solve this problem?
- Yay, ๐งจ thank you very much for your compliment hehe
P.S.
I'm just a beginner, but Korea, or at least a community I learn a lot from, [PseudoLab](https://pseudo-lab.com/), is using huggingface extensively. The government funds a yearly hackathon called [OSSCA](https://www.contribution.ac/), kind of like Hacktoberfest but with dedicated mentors. The RustPython and i18n-Kubernetes teams got Gold and Silver this year. Our team, AzureSDK, got Bronze along with 5 others.
My side-project goal is to enhance "google translator-ish" results: **to a more friendly, neighbourhood tone** in Korean (_grammar checking included_!) I'm still tinkering, just learning mostly how to get this to work. I have another idea for generating children's books, but for a later time perhaps. Thank you for your interest!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20180). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for working on this! Excited to see a new doc in Korean! |
transformers | 20,179 | open | ๐ [i18n-KO] Translating docs to Korean | Hi!
Let's bring the documentation to all the Korean-speaking community ๐ (currently 9 out of 77 complete)
Would you want to translate? Please follow the ๐ค [TRANSLATING guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md). Here is a list of the files ready for translation. Let us know in this issue if you'd like to translate any, and we'll add your name to the list.
Some notes:
* Please translate using an informal tone (imagine you are talking with a friend about transformers ๐ค).
* Please translate in a gender-neutral way.
* Add your translations to the folder called `ko` inside the [source folder](https://github.com/huggingface/transformers/tree/main/docs/source).
* Register your translation in `ko/_toctree.yml`; please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml).
* Once you're finished, open a pull request and tag this issue by including #issue-number in the description, where issue-number is the number of this issue. Please ping @ArthurZucker, @sgugger and @eunseojo for review.
* ๐ If you'd like others to help you with the translation, you can also post in the ๐ค [forums](https://discuss.huggingface.co/).
* With the [HuggingFace Documentation l10n](https://pseudo-lab.com/HuggingFace-0558662add4949558f6b4c4d526547da) initiative of [Pseudo Lab](https://pseudo-lab.com/), full translation will be done even faster. ๐ Please give us your support! Cheers to our team ๐@0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo
์๋
ํ์ธ์!
ํ๊ตญ์ด๋ฅผ ์ฌ์ฉํ๋ ๋ชจ๋๊ฐ ๊ธฐ์ ๋ฌธ์๋ฅผ ์ฝ์ ์ ์๊ฒ ํด๋ณด์์ ๐ (ํ์ฌ 77๊ฐ ๋ฌธ์ ์ค 9๊ฐ ์๋ฃ)
๋ฒ์ญ์ ์ฐธ์ฌํ๊ณ ์ถ์ผ์ ๊ฐ์? ๐ค [๋ฒ์ญ ๊ฐ์ด๋](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md)๋ฅผ ๋จผ์ ์ฝ์ด๋ณด์๊ธฐ ๋ฐ๋๋๋ค. ๋ ๋ถ๋ถ์ ๋ฒ์ญํด์ผํ ํ์ผ๋ค์ด ๋์ด๋์ด ์์ต๋๋ค. ์์
ํ๊ณ ๊ณ์ ํ์ผ์ด ์๋ค๋ฉด ์ฌ๊ธฐ์ ๊ฐ๋จํ ์๋ ค์ฃผ์ธ์. ์ค๋ณต๋์ง ์๋๋ก `์์
์ค`์ผ๋ก ํ์ํด๋๊ฒ์.
์ฐธ๊ณ ์ฌํญ:
* ๊ธฐ์ ๋ฌธ์์ด์ง๋ง (์น๊ตฌ์๊ฒ ์ค๋ช
๋ฃ๋ฏ์ด) ์ฝ๊ฒ ์ฝํ๋ฉด ์ข๊ฒ ์ต๋๋ค. __์กด๋๋ง__ ๋ก ์จ์ฃผ์๋ฉด ๊ฐ์ฌํ๊ฒ ์ต๋๋ค.
* ์ฑ๋ณ์ ์ผ๋ถ ์ธ์ด(์คํ์ธ์ด, ํ๋์ค์ด ๋ฑ)์๋ง ์ ์ฉ๋๋ ์ฌํญ์ผ๋ก, ํ๊ตญ์ด์ ๊ฒฝ์ฐ ๋ฒ์ญ๊ธฐ๋ฅผ ์ฌ์ฉํ์ ํ ๋ฌธ์ฅ ๊ธฐํธ์ ์กฐ์ฌ ๋ฑ์ด ์๋ง๋์ง ํ์ธํด์ฃผ์๊ธฐ ๋ฐ๋๋๋ค.
* [์์ค ํด๋](https://github.com/huggingface/transformers/tree/main/docs/source) ์๋ `ko` ํด๋์ ๋ฒ์ญ๋ณธ์ ๋ฃ์ด์ฃผ์ธ์.
* ๋ชฉ์ฐจ(`ko/_toctree.yml`)๋ ํจ๊ป ์
๋ฐ์ดํธํด์ฃผ์ธ์. [์์ด ๋ชฉ์ฐจ](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml)์ ์์๊ฐ ๋์ผํด์ผ ํฉ๋๋ค.
* ๋ชจ๋ ๋ง์น์
จ๋ค๋ฉด, ๊ธฐ๋ก์ด ์ํํ๋๋ก PR์ ์ฌ์ค ๋ ํ์ฌ ์ด์(`#20179`)๋ฅผ ๋ด์ฉ์ ๋ฃ์ด์ฃผ์๊ธฐ ๋ฐ๋๋๋ค. ๋ฆฌ๋ทฐ ์์ฒญ์ @ArthurZucker๋, @sgugger๋, @eunseojo๋๊ป ์์ฒญํด์ฃผ์ธ์.
* ๐ ์ปค๋ฎค๋ํฐ์ ๋ง์๊ป ํ๋ณดํด์ฃผ์๊ธฐ ๋ฐ๋๋๋ค! ๐ค [ํฌ๋ผ](https://discuss.huggingface.co/)์ ์ฌ๋ฆฌ์
๋ ์ข์์.
* [๊ฐ์ง์ฐ๊ตฌ์](https://pseudo-lab.com/)์ [์ด๋์
ํฐ๋ธ](https://pseudo-lab.com/HuggingFace-0558662add4949558f6b4c4d526547da)๋ก ๋ฒ์ญ์ด ๋์ฑ ๋น ๋ฅด๊ฒ ์งํ๋ ์์ ์
๋๋ค. ๐ ๋ง์ ์์ ๋ถํ๋๋ ค์! ์ฐ๋ฆฌํ ํ์ดํ
๐
@0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo
## GET STARTED
- [x] ๐ค Transformers https://github.com/huggingface/transformers/pull/20180
- [x] Quick tour https://github.com/huggingface/transformers/pull/20946
- [x] Installation https://github.com/huggingface/transformers/pull/20948
## TUTORIAL
- [x] Pipelines for inference https://github.com/huggingface/transformers/pull/22508
- [x] Load pretrained instances with an AutoClass https://github.com/huggingface/transformers/pull/22533
- [x] Preprocess https://github.com/huggingface/transformers/pull/22578
- [x] Fine-tune a pretrained model https://github.com/huggingface/transformers/pull/22670
- [x] Distributed training with ๐ค Accelerate https://github.com/huggingface/transformers/pull/22830
- [ ] Share a model
## HOW-TO GUIDES
### GENERAL USAGE
- [x] Create a custom architecture https://github.com/huggingface/transformers/pull/22754
- [x] Sharing custom models https://github.com/huggingface/transformers/pull/22534
- [x] Train with a script https://github.com/huggingface/transformers/pull/22793
- [x] Run training on Amazon SageMaker https://github.com/huggingface/transformers/pull/22509
- [ ] Converting from TensorFlow checkpoints
- [x] Export to ONNX https://github.com/huggingface/transformers/pull/22806
- [ ] Export to TorchScript
- [ ] Troubleshoot
### NATURAL LANGUAGE PROCESSING
- [x] Use tokenizers from ๐ค Tokenizers https://github.com/huggingface/transformers/pull/22956
- [ ] Inference for multilingual models
- [ ] Text generation strategies
#### TASK GUIDES
- [x] Text classification https://github.com/huggingface/transformers/pull/22655
- [x] Token classification https://github.com/huggingface/transformers/pull/22945
- [ ] Question answering
- [ ] Causal language modeling
- [x] Masked language modeling https://github.com/huggingface/transformers/pull/22838
- [x] Translation https://github.com/huggingface/transformers/pull/22805
- [x] Summarization https://github.com/huggingface/transformers/pull/22783
- [ ] Multiple choice
### AUDIO
- [ ] Audio classification
- [ ] Automatic speech recognition
### COMPUTER VISION
- [ ] Image classification
- [ ] Semantic segmentation
- [ ] Video classification
- [ ] Object detection
- [ ] Zero-shot object detection
- [ ] Zero-shot image classification
- [ ] Depth estimation
### MULTIMODAL
- [x] Image captioning https://github.com/huggingface/transformers/pull/22943
- [ ] Document Question Answering
### PERFORMANCE AND SCALABILITY
- [ ] Overview
- [ ] Training on one GPU
- [ ] Training on many GPUs
- [ ] Training on CPU
- [ ] Training on many CPUs
- [ ] Training on TPUs
- [ ] Training on TPU with TensorFlow
- [ ] Training on Specialized Hardware
- [ ] Inference on CPU
- [ ] Inference on one GPU
- [ ] Inference on many GPUs
- [ ] Inference on Specialized Hardware
- [ ] Custom hardware for training
- [ ] Instantiating a big model
- [ ] Debugging
- [ ] Hyperparameter Search using Trainer API
- [ ] XLA Integration for TensorFlow Models
### CONTRIBUTE
- [ ] How to contribute to transformers?
- [ ] How to add a model to ๐ค Transformers?
- [ ] How to convert a ๐ค Transformers model to TensorFlow?
- [ ] How to add a pipeline to ๐ค Transformers?
- [ ] Testing
- [ ] Checks on a Pull Request
- [ ] ๐ค Transformers Notebooks
- [ ] Community resources
- [ ] Benchmarks
- [ ] Migrating from previous packages
### CONCEPTUAL GUIDES
- [ ] Philosophy
- [ ] Glossary
- [ ] What ๐ค Transformers can do
- [ ] How ๐ค Transformers solve tasks
- [ ] The Transformer model family
- [ ] Summary of the tokenizers
- [ ] Attention mechanisms
- [ ] Padding and truncation
- [ ] BERTology
- [ ] Perplexity of fixed-length models
- [ ] Pipelines for webserver inference
<details>
<summary>
## Other relevant PRs along the way
</summary>
- Enable easy Table of Contents editing https://github.com/huggingface/transformers/pull/22581
- Added forgotten internal English anchors for `sagemaker.mdx` https://github.com/huggingface/transformers/pull/22549
- Fixed anchor links for `auto_class`, `training` https://github.com/huggingface/transformers/pull/22796
- Update ToC from upstream https://github.com/huggingface/transformers/pull/23112
</details> | 11-12-2022 06:55:39 | 11-12-2022 06:55:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @sgugger, may you please add the `WIP` tag to this issue? Thank you so much.<|||||>For contributors and PseudoLab team members, please see a PR template [gist](https://gist.github.com/wonhyeongseo/af2a8855264bb494212f81e8b8173b9a) ([raw](https://gist.githubusercontent.com/wonhyeongseo/af2a8855264bb494212f81e8b8173b9a/raw/4a50fca630c4f6188cea658bdba98704d9a3e979/pr-template.md)) that could ease your first PR experience.
@0525hhgus, @KIHOON71, @gabrielwithappy, @jungnerd, @sim-so, @HanNayeoniee, @wonhyeongseo<|||||>Dear @sgugger, would you add `document` label to this issue?
I think other issues for the translation have a `document` label.
Thank you in advance
@wonhyeongseo
I changed my PR with a new PR template. would you change
`Load pretrained instances with an AutoClass` to [[WIP]๐[i18n-KO] Translate autoclass_tutorial to Korean and Fix the typo of quicktour #22533](https://github.com/huggingface/transformers/pull/22533)
<|||||>@sgugger wow! Thank you a million! :-)<|||||>@sgugger
Dear HuggingFace Team,
I hope you are doing well. My name is Wonhyeong Seo from the [Pseudo Lab](https://linkedin.com/company/pseudolab) team. As you may know, we are actively working on localizing the `huggingface/transformers` repository documentation into Korean. Our goal is to make this valuable resource more accessible to Korean-speaking users, thereby promoting the development of NLP and machine learning in Korea and beyond.
We are currently in the process of applying for [government sponsorship](https://www.oss.kr/notice/show/eac6d5c8-01c1-4cc0-b2d1-1a88e96942e2?page=1) to support our localization efforts. To strengthen our application, we kindly request your permission to use the documentation's Google Analytics data to include in our reports. This data will help us demonstrate the impact of our work and the potential benefits of localizing the documentation.
Additionally, we would be grateful for any feedback or suggestions from the HuggingFace team regarding our localization project. Your insights will be invaluable in ensuring our efforts align with your vision and standards, and in fostering a successful collaboration.
Thank you for considering our request. We look forward to your response and the opportunity to work together to expand the reach of the `huggingface/transformers` repository.
Best regards,
Hyunseo Yun, Kihoon Son, Gabriel Yang, Sohyun Sim,
Nayeon Han, Woojun Jung, Wonhyeong Seo
The Localization Initiative members of Pseudo Lab<|||||>Hey @wonhyeongseo, thanks for all you work on translating the documentation to Korean!
Do you mind contacting me at lysandre at hf.co so we may see how best to help you?<|||||>Welcome to a simple guide on how to use ChatGPT to speed up the translation process. By following these guidelines, you can create a first draft in less than an hour. Please note that it is essential to proofread your work thoroughly before sharing it with your colleagues.
(Optional) If you want to extract only the content without code blocks, tables, and redundant new lines, you can use the command `sed '/```/,/```/d' file.md | sed '/^|.*|$/d' | sed '/^$/N;/^\n$/D'`. In case you are using a mobile device, you can check the link https://sed.js.org/ for using sed online.
To initiate the translation process, you need to provide your sentences as input to ChatGPT. Your first prompt should look like this:
```mdx
What do these sentences about Hugging Face Transformers (a machine learning library) mean in Korean? Please do not translate the word after a 🤗 emoji as it is a product name.
```md
<your sentences>
```
After submitting the first prompt, you can use the following prefix for the next ten prompts:
```mdx
```next-part
<your sentences>
```
*Note that after ten prompts, you must remind ChatGPT of the task if you are not using LangChain.*
By following these guidelines, you can create a first draft of your translation in a shorter time frame. However, it is crucial to emphasize that the quality of the final output depends on the accuracy of the input and the proofreading process.
PS: Please note that we do not have a Korean LLM that can automate the proofreading process at the moment. However, in July, Naver plans to launch their HyperCLOVA Korean LLM model, which might automate the entire process. We are optimistic that our government proposal will be accepted, allowing us to increase our talent pool and work towards achieving a more automated translation process with them.<|||||>Dear @LysandreJik ,
I hope you are doing well. I wanted to inform you that I have sent an email with the subject line "[i18n-KO] Request for Collaboration: Hugging Face Mentorship Program." Whenever you have a moment, please take a look and provide a response. Thank you so much for your interest in this collaboration. If you have any questions, please don't hesitate to contact me.
Best regards,
Wonhyeong Seo<|||||>@gabrielwithappy @sim-so @jungnerd @HanNayeoniee @0525hhgus @KIHOON71
From this merge of `model_sharing.mdx` #22991 , I learned that we don't have to `git rebase -i` as other open source libraries mandate. Therefore, I propose we commit in 4 steps like this:
1. `docs: ko: <file-name>` - As we always do for the first commit. Copy the initial English file under `ko` and edit TOC: both external and (soon-to-be-automated) internal.
> From this point forward, you may need to squash commits in each step.
2. `feat: [nmt|manual] draft` - Machine-translate the entire file with: dedicated translators, prompts, or any kind of automation. You may choose to translate manually, and that is ok as long as you specify it in the commit message.
3. `fix: manual edits` - Proofread the draft thoroughly.
4. `fix: resolve suggestions` - Get reviews and resolve suggestions.
With this, it will be easier for collaborators to see the original English and your changes side by side. Not to mention, we can use diffs as pre-training data for the in-house rlhf translation model.
@ArthurZucker @sgugger, when merging a PR, how is the main commit message decided if there are multiple commits? Do you have to write it manually, or is the first commit message of the PR selected? Thank you for your insights and continued support. Much love from Korea 🇰🇷<|||||>The main commit message is the title of the PR.<|||||>Hey all! As some people were interested in a place to discuss translations, we opened a space in the [HF Discord server](http://hf.co/join/discord) with a category for internationalization and translation efforts, including a Korean channel!
transformers | 20,178 | closed | Support Bloom models in distillation example | # What does this PR do?
This updates the model distillation example to support more models on the Hub, including Bloom models.
I use this code to create smaller, monolingual generative models based on mGPT or Bloom. Demo notebook: https://colab.research.google.com/drive/1DJLNA4TcW45HYzuSDAVb8t6n7OTU1NyI?usp=sharing
- Support the Bloom model and tokenizer in the distillation example scripts (see the short sketch after this list)
- Handle Hub model and tokenizer names with a slash, such as 'sberbank-ai/mGPT'
- Allow models without `max_position_embeddings` and tokenizers without `max_model_input_sizes` set
- Add `max_model_input_size` as a CLI param to make it easier to train on one GPU and support tokenizers without `max_model_input_sizes`
- Remove the `git log` call and the `git-python` dependency
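As a rough illustration of what this enables (plain `from_pretrained` calls rather than the scripts' actual CLI; the checkpoint name is only an example):
```python
from transformers import AutoTokenizer, BloomForCausalLM

# Slash-containing Hub names such as "bigscience/bloom-560m" are now handled,
# and Bloom checkpoints can serve as teacher models for distillation.
teacher = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
```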
----
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes?
- [ ] Did you write any new necessary tests?
| 11-11-2022 19:27:12 | 11-11-2022 19:27:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20178). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,177 | closed | Add missing ESM autoclass | The autoclass for `EsmForMaskedLM` was missing, this adds it back! | 11-11-2022 15:40:43 | 11-11-2022 15:40:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 20,176 | closed | Add GPT-SW3 models to huggingface | ### Model description
At AI Sweden we are developing GPT models for the Nordic region. Languages include English, Swedish, Danish, Norwegian and Icelandic.
The models are of the GPT family.
The models will range in size from 126M to 20B parameters.
They are trained from scratch on a large corpus of 320B tokens.
They are trained with the NeMo Megatron framework and use a SentencePiece tokenizer.
The weights are not shared yet; we intend to share them through Hugging Face, as well as publish our training process and results.
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Training framework: https://developer.nvidia.com/nemo/megatron | 11-11-2022 14:29:16 | 11-11-2022 14:29:16 | PR merged! :partying_face: |
transformers | 20,175 | closed | [ROC_BERT] Make CI happy | # What does this PR do?
Fixes a small slow test that was failing when trying to play with 8-bit conversion for BERT models
cc @ydshieh
| 11-11-2022 12:51:19 | 11-11-2022 12:51:19 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20175). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @younesbelkada, thank you for trying to make CI happy.
But I am not sure why we need `torch.allclose` for 2 integer tensors. This should be used for float tensors. For the integer outputs (here token ids), they should be equal, so `self.assertEqual` would be the one to use, I think.<|||||>Thanks @ydshieh for the heads-up, indeed I believe we can use that. Let me update it.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20175). All of your documentation changes will be reflected on that endpoint.<|||||>Haha, thank you very much @ydshieh !!
transformers | 20,174 | closed | Add `accelerate` support for `ViT` family | # What does this PR do?
This PR adds `accelerate` support for ViT models, so these models can be loaded in 8-bit as follows:
```
# pip install accelerate bitsandbytes
from transformers import ViTFeatureExtractor, ViTForImageClassification
from PIL import Image
import requests
url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-large-patch32-384')
# preprocess the image into the pixel values expected by the model
inputs = feature_extractor(images=image, return_tensors="pt")
model_8bit = ViTForImageClassification.from_pretrained('google/vit-large-patch32-384', device_map="auto", load_in_8bit=True)
outputs_8bit = model_8bit(**inputs)
logits = outputs_8bit.logits
# model predicts one of the 1000 ImageNet classes
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model_8bit.config.id2label[predicted_class_idx])
```
This PR introduces the first 8-bit compatible vision model.
The same script works for `deit` too.
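For example, a minimal sketch of the DeiT variant (the checkpoint name below is only an illustrative choice, and the rest of the snippet above stays the same):
```python
from transformers import DeiTForImageClassification

# Hypothetical DeiT counterpart of the ViT snippet above; the checkpoint name is just an example.
model_8bit = DeiTForImageClassification.from_pretrained(
    'facebook/deit-base-patch16-224', device_map="auto", load_in_8bit=True
)
```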
Putting the PR as a draft as I have a few questions!
cc @NielsRogge @sgugger | 11-11-2022 12:44:17 | 11-11-2022 12:44:17 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20174). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,173 | closed | Misleading tensor type in code and documentation of Wav2Vec2ForPreTraining | The Wav2Vec2ForPreTraining expects in its forward pass a torch.BoolTensor for the sampled_negative_indices.
https://github.com/huggingface/transformers/blob/cbbeca3d1733aa7c9b443af5ff231a5affcd8a1e/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1404
But as the example and the code then indicate, this tensor should not be a boolean mask but should instead store the indices of the negative samples. Therefore its expected type should be long.
This was misleading and led to a silent bug in my code for quite a long time, as I would send a boolean tensor of the negative indices instead of a long tensor.
If my reading of the code is correct, could we change the expected type of sampled_negative_indices?
Maybe @patrickvonplaten (as it seems that he implemented the code) could confirm my reading?
It could be worth clarifying the name of mask_time_indices as well, since this one expects a mask and therefore a boolean and not a long tensor. It is also converted to a boolean tensor at the beginning of the forward call, so this could be misleading for others as well.
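For reference, here is a minimal sketch of a pre-training forward call with the dtypes described above (the checkpoint, shapes, and mask pattern are illustrative assumptions on my side, not the library's documented example):
```python
import torch
from transformers import Wav2Vec2ForPreTraining

model = Wav2Vec2ForPreTraining.from_pretrained("facebook/wav2vec2-base")

batch_size, seq_len = 2, 49  # 49 ~ feature-sequence length for 1 s of 16 kHz audio (assumption)
input_values = torch.randn(batch_size, 16000)

# mask_time_indices: a boolean mask over the feature sequence
mask_time_indices = torch.zeros(batch_size, seq_len, dtype=torch.bool)
mask_time_indices[:, ::5] = True

# sampled_negative_indices: integer positions of the negatives, NOT a boolean mask
sampled_negative_indices = torch.randint(
    0, seq_len, (batch_size, seq_len, model.config.num_negatives), dtype=torch.long
)
# the model indexes negatives over the flattened (batch * sequence) dimension,
# so offset each batch element accordingly
sampled_negative_indices += torch.arange(batch_size)[:, None, None] * seq_len

outputs = model(
    input_values,
    mask_time_indices=mask_time_indices,
    sampled_negative_indices=sampled_negative_indices,
)
```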
Thank you for the implementation and help! | 11-11-2022 11:12:42 | 11-11-2022 11:12:42 | cc @sanchit-gandhi <|||||>Hey @PierreOrhan,
You're entirely correct, the variable `sampled_negative_indices` is a tensor of integers of dim `(batch_size, sequence_length, num_negatives)` that specifies the positions (indices) of quantised vectors to use as negatives during pre-training, not a boolean mask. Would you like to open a PR to fix this? 🤗 You can tag me for a review!
Changing the name of the arg `mask_time_indices` would be a breaking change, but we can certainly update the docstring to clarify this!<|||||>Sure!
While we are at it, in the original paper and the fairseq implementation, the feature extractor gradient is scaled by 0.1 for the base architecture. I can add that gradient scaling too if you wish; it really helped my training runs. This would require either using a constant value (0.1, though since it is not used for the Large model this might not be the best idea) or changing the config class of Wav2Vec2 to allow setting this multiplier.<|||||>This is specifically for pre-training, right? We could modify the pre-training scripts accordingly!
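For reference, the fairseq-style trick is a tiny autograd function that leaves the forward pass untouched and only rescales the gradient on the way back; a rough sketch (not the exact fairseq or transformers code) applied to the feature encoder output could look like this:
```python
import torch


class GradMultiply(torch.autograd.Function):
    """Identity in the forward pass; multiplies the incoming gradient by `scale` in backward."""

    @staticmethod
    def forward(ctx, x, scale):
        ctx.scale = scale
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output * ctx.scale, None


# Illustrative placement inside the model's forward, right after the conv feature encoder:
# extract_features = GradMultiply.apply(extract_features, 0.1)
```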
https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining<|||||>Hey @PierreOrhan! Let me know if you're still interested in opening a PR to fix this! It would be awesome to update the Wav2Vec2 code to prevent further silent bugs like this occurring for other users in the future!
We can also look into the gradient scaling in a separate PR if you want! Happy to help with this integration too (I think it could be beneficial for training)<|||||>Sure, I have this on my todo list and should be able to submit in a month or so (I am finishing a project right now). It will take me time since I have never opened a PR on a large repo, so I need to understand the whole process.
Best,<|||||>Hey @PierreOrhan! Sounds good! Best of luck with finishing off your current project!
Exciting that this will be your first PR on a large repo! I'll be on hand to help you with this process to make it as easy as possible! We have a pretty comprehensive guide for opening a PR on transformers: https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md Feel free to have a skim through this guide to get a feel for the steps involved, and don't hesitate to ask any questions here or on a draft PR! More than happy to help!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,172 | closed | Remove Optional[PILImageResampling] typing | # What does this PR do?
Resolves a bug resulting from type checks when Pillow < 2.9.1 is used. With lower versions of Pillow, `PILImageResampling` is an alias for the `PIL.Image` module, c.f. [definition here](https://github.com/huggingface/transformers/blob/d3c05666798bd8fcc7a03564436e5100b080a5df/src/transformers/image_utils.py#L48). When defining `var: Optional[obj] = None`, the specified `obj` in `Optional` can't be a module, so it fails e.g. [here](https://github.com/huggingface/transformers/blob/d3c05666798bd8fcc7a03564436e5100b080a5df/src/transformers/models/segformer/image_processing_segformer.py#L231).
The PR also replaces any remaining `PIL.Image.Resampling` with `PILImageResampling` in the codebase.
Another option is to define a new type. I decided to go for the fastest resolution - LMK if you think this is better.
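To illustrate the failure mode, a minimal sketch (not the library code; with an older Pillow the alias points at a module, and `typing` rejects a module as a type argument):
```python
from typing import Optional

import PIL.Image

# With older Pillow there is no `PIL.Image.Resampling` enum, so the alias
# falls back to the module itself:
PILImageResampling = PIL.Image

# Subscripting Optional with a module raises a TypeError as soon as this
# function definition is evaluated:
def resize(resample: Optional[PILImageResampling] = None):
    ...
```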
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 11-11-2022 09:55:34 | 11-11-2022 09:55:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20172). All of your documentation changes will be reflected on that endpoint. |
transformers | 20,171 | closed | How to train transformer using my own data? | ### Feature request
I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!!
### Motivation
I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!!
### Your contribution
I have not seen a tutorial for training on my own data... Can I? I am a new AI learner. Thank you!!! | 11-11-2022 09:34:23 | 11-11-2022 09:34:23 | Here is a tutorial in the Transformers docs on how to fine-tune a pretrained model using the library (https://huggingface.co/docs/transformers/training), and you can also check the Hugging Face course https://huggingface.co/course/chapter1/1. For more help, you will need to specify exactly what you want to do.
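For instance, a very small fine-tuning sketch with the `Trainer` API, along the lines of that tutorial (the dataset and checkpoint are only illustrative choices):
```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Small text-classification example: 5-star Yelp reviews.
dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="yelp_out", num_train_epochs=1),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(1000)),
)
trainer.train()
```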
Besides, I noticed that you opened another issue #20187 for the exact same question. You should always avoid doing that, as it just makes the maintainers' job harder; please close your new issue. Also, I personally think it is better to ask these kinds of questions on the Hugging Face forum (https://discuss.huggingface.co/) or Discord server (https://discord.com/invite/JfAtkvEtRb).<|||||>Check out the free HuggingFace course: https://hf.co/course.
transformers | 20,170 | closed | I am getting below error message when loading XLM 17 Language pretrained model. | Some weights of XLMWithLMHeadModel were not initialized from the model checkpoint at xlm-mlm-17-1280 and are newly initialized: ['transformer.position_ids']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. | 11-11-2022 09:33:15 | 11-11-2022 09:33:15 | Can anyone help me with how to use this model for language translation?
<|||||>You can disregard the warning, it's issued wrongly (and it's been fixed on main).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 20,169 | closed | trocr deepspeed | ### System Info
- `transformers` version: 4.24.0
- Platform: Linux-5.10.14-1.el7.elrepo.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
Hi @stas00,
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am using the Hugging Face DeepSpeed integration to train TrOCR in JupyterLab.
Both loading methods hit the same issue:
`model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained()`
`model = VisionEncoderDecoderModel.from_pretrained()`
### Expected behavior
However, I ran into the following error:
`AttributeError: 'VisionEncoderDecoderConfig' object has no attribute 'hidden_size'`
Do we have any configuration to fix this? | 11-11-2022 01:04:06 | 11-11-2022 01:04:06 | issue resolved by
`hidden_size = 768`
`setattr(model.config, "hidden_size", hidden_size)`
refer to this #15526 |
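In context, a rough sketch of the workaround (the checkpoint name is only an example; use the hidden size that matches your encoder):
```python
from transformers import VisionEncoderDecoderModel

# Example TrOCR checkpoint; any VisionEncoderDecoder checkpoint shows the same behaviour.
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

# VisionEncoderDecoderConfig only exposes `encoder`/`decoder` sub-configs, while the
# DeepSpeed integration looks for a top-level `hidden_size`, so expose one manually:
setattr(model.config, "hidden_size", model.config.encoder.hidden_size)  # 768 for this base-sized checkpoint
```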