| column | dtype | values / lengths |
|---|---|---|
| repo | stringclasses | 1 value |
| number | int64 | 1 – 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 – 487 |
| body | stringlengths | 0 – 234k |
| created_at | stringlengths | 19 – 19 |
| closed_at | stringlengths | 19 – 19 |
| comments | stringlengths | 0 – 293k |
transformers
19,867
closed
[wip test doc build]
null
10-25-2022 10:00:18
10-25-2022 10:00:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19867). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,866
closed
[wip test doc-build]
null
10-25-2022 08:54:29
10-25-2022 08:54:29
transformers
19,865
open
Add VATT model
### Model description Hey, as discussed with @NielsRogge a few weeks back, I'd like to work on adding the "VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text" model from Google. It is basically three transformers (Video/Audio/Text) that are trained jointly in an unsupervised manner using contrastive loss functions. For downstream tasks they fine-tune the Transformers separately, but also explore a version that shares the weights for all modalities. For pre-training they use text-video-audio triplets from HowTo100M and video-audio pairs from AudioSet. The authors describe how to fine-tune VATT for vision and audio classification tasks and provide weights for the fine-tuned versions. The backbone for vision is ViT, for audio WaveFormTransformer, and for text they are using BERT/T5. ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/pdf/2104.11178.pdf GitHub: https://github.com/google-research/google-research/tree/master/vatt
10-25-2022 08:36:37
10-25-2022 08:36:37
@johko have you started implementing it?<|||||>@fcakyon yes I have started, but progress is still rather slow, as that is my first model contribution and I have to figure out some stuff. <|||||>@johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :) Are you working on a TF implementation?<|||||>> @johko I totally understand it. Interested in your implementation since I will be using VATT in my research next year :) > > Are you working on a TF implementation? Sorry for the late reply (again 🙈). Yes, I'm working on a TF implementation. As the original repo is using it, I'm doing that first and will then see about PyTorch.<|||||>@johko, thanks for the response! I may also help with the PyTorch part once you finalize the TF implementation 👍 <|||||>@fcakyon that would be great, as my expertise is more in TF 🙂<|||||>Hey @NielsRogge , I'm sorry but I think I have to stop working on this for good. I'd love to finish it, but every time I think I finally have some time to do it, something else comes around :disappointed: I think I just can't provide a big contribution like this atm and will rather focus on smaller things. But maybe @fcakyon wants to pick up on it. Sorry for blocking this for so long.
transformers
19,864
closed
Whisper's Tokenizer encoding function is not user-friendly
### System Info - `transformers` version: 4.23.0.dev0 - Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1+cu102 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu) - Jax version: 0.3.14 - JaxLib version: 0.3.14 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? Text encoded by Whisper should by default have an EOS token at the end (just like any sequence-to-sequence transformer model) and also include the correct other tokens if necessary. Also it should **not** return the `attention_mask` as Whisper never uses an `attention_mask`. **Note**: Encoding of text tokens is only needed for fine-tuning the model, not for inference, so this bug/feature request is not relevant for inference. Currently when doing: ```python from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small") tokenizer("hey") ``` One gets: ``` {'input_ids': [17230], 'attention_mask': [1]} ``` There are multiple problems with this: - The EOS token is not appended, but whisper **always** needs an EOS for training - There should be the following config parameters for the whisper tokenizer: - Set a language id so that the tokenizer automatically adds the lang id prefix - Set a "use_timestamps" true/false flag to the tokenizer that decides whether the "notimestamps" token should be added or not - Remove this line: https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/models/whisper/tokenization_whisper.py#L119 as this is not relevant for whisper and a copy-paste from GPT2 -> whisper should never set the EOS token as the BOS token IMO (@sanchit-gandhi we did this for our paper, but this was more a lucky bug than a solid case) - Whisper should **not** return the `attention_mask` => Let's try to make whisper as user-friendly as possible for fine-tuning. Say I'd like to fine-tune on a multi-lingual language. If I remember correctly, the format for the **labels** to fine-tune whisper on multi-lingual data is: ``` <lang-id><|notimestamps|> ... text ... <eos> ``` @ArthurZucker or is it first `<|notimestamps|>` and then `<lang-id>` and then `<eos>`? The `decoder_input_ids` should then just add `<|startoftranscript>` and remove `<eos>`, which happens automatically in the shift function. Now I should be able to get this encoding behavior by default when doing: ``` from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", lang_id="dv", predict_timestamps=False) tokenizer("hey") ``` ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import WhisperTokenizer tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", lang_id="dv", predict_timestamps=False) tokenizer("hey") ### Expected behavior ``` <lang-id><|notimestamps|> ... text ... <eos> ```
10-25-2022 08:31:23
10-25-2022 08:31:23
@ArthurZucker @sanchit-gandhi could you take a look here? <|||||>Agree with all of the above **other** than the attention mask: the Whisper tokeniser **should** return an attention mask. This attention mask is not used to mask hidden-states (as is done with the Whisper feature-extractor), but rather mask padded token ids in the computation of the C.E. loss. We require the attention mask to inform the system where the padded tokens reside, and thus where to ignore terms in the loss computation.
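To make the requested label format concrete, below is a rough, hedged illustration of building such labels by hand with the current tokenizer rather than with the proposed `lang_id`/`predict_timestamps` API; the `<|en|>` and `<|notimestamps|>` tokens are assumed to be present in the tokenizer vocabulary.

```python
# Illustrative workaround only, not the proposed API. Assumes the language and
# no-timestamps tokens exist in the Whisper tokenizer vocabulary.
from transformers import WhisperTokenizer

tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small")
prefix_ids = tokenizer.convert_tokens_to_ids(["<|en|>", "<|notimestamps|>"])
text_ids = tokenizer("hey", add_special_tokens=False).input_ids
# <lang-id><|notimestamps|> ... text ... <eos>
labels = prefix_ids + text_ids + [tokenizer.eos_token_id]
```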
transformers
19,863
closed
Fix doctest for `GenerationMixin.contrastive_search`
# What does this PR do? Just update the expected value to use `'` instead of `"`.
10-25-2022 08:05:36
10-25-2022 08:05:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,862
closed
Trainer RuntimeError CUDA error
### System Info ### Versions I've tried using - transformers==4.15.0, 4.8.0 and latest - python 3.8, 3.9 and 3.10 ### nvidia-smi NVIDIA-SMI 515.65.01 Driver Version: 515.65.01 CUDA Version: 11.7 Thanks! ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ### Actual Error: Class: `transformers/trainer.py` Failing at: `tr_loss = torch.tensor(0.0).to(args.device)` Returns: `RuntimeError: CUDA error: invalid argument` ### Code I am following several examples and getting the same error above at the same line. As a reference you can reproduce it according to this [code sample](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo.py) The exact same error occurs with other huggingface training examples. ### Also tried - if I run the line `tr_loss = torch.tensor(0.0).to(args.device)` as a standalone it works fine - Also tried to run this line as part of gpt_neo.py in the above example and it worked fine, but later failed as part of `transformers/trainer.py` - I made sure CUDA is running fine: `torch.cuda.is_available()` - Running only `torch.tensor(0.0)` works fine; only when adding `.to(device)` does it fail ### Expected behavior No errors at `torch.tensor(0.0).to(device)`
10-25-2022 07:41:00
10-25-2022 07:41:00
This looks linked to your particular setup. Can you add a print of `args.device` in the script you are running and copy-paste the result of `transformers-cli env` (as was requested in the template)?<|||||>Thanks for the prompt reply and sorry for missing the `transformers-cli env` `args.device` cuda:0 Env: - transformers version: 4.23.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.8.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.1 (cpu) - Jax version: 0.3.23 - JaxLib version: 0.3.22 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> (I've also tried with other PyTorch & transformers versions)<|||||>I hope it's ok that I'm putting a link to a different ML library (in case it's not, I'll delete it) [this issue seems to be similar](https://github.com/Lightning-AI/lightning/issues/11818#issuecomment-1033541994)<|||||>It doesn't look exactly similar in the sense that it is in an environment without a GPU, whereas yours shows one. Unless you are not executing the script within the exact same env as the results of the commands passed above, of course.
transformers
19,861
closed
Finetuning transformers for long document summarisation
I'm wondering if there is any sample code or blog post that can help me understand fine-tuning of transformer models for long document summarisation?
10-25-2022 07:37:21
10-25-2022 07:37:21
@patil-suraj @patrickvonplaten <|||||>Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep the issues for bugs and feature requests only.<|||||>Sure, and apologies. I'll ask on the forum now. Thanks for the information.
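For readers looking for a starting point, here is a minimal, hedged sketch of fine-tuning a long-document summarization model; the LED checkpoint and the toy in-memory dataset are assumptions for illustration, not an official example.

```python
# Illustrative sketch only: assumed checkpoint and toy data, not an official script.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "allenai/led-base-16384"  # long-input encoder-decoder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

documents = ["A very long report about transformers. " * 500]  # stand-in for real long documents
summaries = ["A short summary of the report."]

def encode(document, summary):
    features = tokenizer(document, max_length=4096, truncation=True)
    features["labels"] = tokenizer(text_target=summary, max_length=128, truncation=True)["input_ids"]
    return features

train_dataset = [encode(d, s) for d, s in zip(documents, summaries)]

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="led-summarization", per_device_train_batch_size=1, max_steps=1),
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```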
transformers
19,860
closed
Can run_translation.py support NLLB model fine-tuning?
### Feature request Can run_translation.py support NLLB model fine-tuning? run_translation.py makes it much easier to fine-tune a model. ### Motivation I want an easy way to fine-tune an NLLB model, as it is quite difficult to fine-tune an NLLB model from its docs. ### Your contribution Star the repository
10-25-2022 07:29:18
10-25-2022 07:29:18
The script supports NLLB models; if you run into any issue you can use the [forums](https://discuss.huggingface.co/) to ask the community for help.
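For reference, a hedged sketch of what such a run could look like with the standard `run_translation.py` arguments; the checkpoint, language codes, and data paths below are placeholders to adapt, not a verified recipe.

```bash
# Illustrative only: adjust the checkpoint, NLLB language codes and data files to your setup.
python examples/pytorch/translation/run_translation.py \
    --model_name_or_path facebook/nllb-200-distilled-600M \
    --do_train \
    --do_eval \
    --source_lang eng_Latn \
    --target_lang fra_Latn \
    --forced_bos_token fra_Latn \
    --train_file data/train.json \
    --validation_file data/val.json \
    --output_dir ./nllb-finetuned \
    --per_device_train_batch_size 4 \
    --predict_with_generate
```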
transformers
19,859
closed
[WIP] Add type hints to layoutlmv2 model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-25-2022 07:27:33
10-25-2022 07:27:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19859). All of your documentation changes will be reflected on that endpoint.<|||||>I think you're annotating a bunch of TF types into a Torch modeling file here! LayoutLMv2 does not have a TF port (v1 and v3 do, though)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,858
closed
Add type hints to TFPegasusModel
# What does this PR do? This adds type annotations to `TFPegasusModel` and `TFPegasusForConditionalGeneration` as part of #16059 ## Who can review? @Rocketknight1 Thanks :)
10-25-2022 06:52:40
10-25-2022 06:52:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Looks perfect now. Thank you!
transformers
19,857
closed
Import Error: cannot import name 'TFBertTokenizer' from 'transformers'
### System Info Platform - macOS Python Version - 3.9.0 Tensorflow version - 2.10.0 Transformers version - 4.23.1 (Tried different versions) ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When I try to import TFBertTokenizer using the statement “from transformers import TFBertTokenizer” I come across the below error. ImportError: cannot import name ‘TFBertTokenizer’ from ‘transformers’ I am able to import BertTokenizer though. Did something change? If I recollect, I was able to import TFBertTokenizer in the past. I also checked the code base and the class TFBertTokenizer still exists as part of the transformers package. Steps to Reproduce: 1) pip install transformers 2) In a new shell execute the below statement 'from transformers import TFBertTokenizer' ### Expected behavior TFBertTokenizer should be imported similar to BertTokenizer.
10-25-2022 04:58:36
10-25-2022 04:58:36
`TFBertTokenizer` is in the main init but it's a fairly recent addition. Are you certain you have the latest version of Transformers? You can print it with `from transformers import __version__; print(__version__)`<|||||>@sgugger For some reason the transformers version was showing 4.20.1 when I ran the above command. I updated it to 4.23.1 and now I don't see any error. Thank you for your time. <|||||>How do you update the version?<|||||>@Amlalqhtani `pip install --upgrade transformers`
transformers
19,856
closed
Removing BertConfig inheritance from configuration_roberta.py
Related to https://github.com/huggingface/transformers/issues/19303 Pinging @sgugger for review
10-25-2022 04:53:03
10-25-2022 04:53:03
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19856). All of your documentation changes will be reflected on that endpoint.
transformers
19,855
closed
Removing BertConfig inheritance from RoBERTa configuration
Related to https://github.com/huggingface/transformers/issues/19303
10-25-2022 04:47:54
10-25-2022 04:47:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19855). All of your documentation changes will be reflected on that endpoint.
transformers
19,854
closed
Simple PyPlot for beginners
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-25-2022 03:46:23
10-25-2022 03:46:23
_The documentation is not available anymore as the PR was closed or merged._<|||||>Spam
transformers
19,853
closed
setting pipeline `tokenizer.pad_token_id` - bug / mistake in error message
### System Info transformers version 4.18.0, Python 3.8.0 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Run text generation pipeline using model with tokenizer without pad_token. In my case: `generator = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B', device=0)`. Running batched text generation:`generator(texts, ..., batch_size=8)` gives error message: "ValueError: Pipeline with tokenizer without pad_token cannot do batching. You can try to set it with `pipe.tokenizer.pad_token_id = model.config.eos_token_id`". Running `generator.tokenizer.pad_token_id = generator.model.config.eos_token_id` gives error message: `TypeError: 'int' object is not iterable` Running `generator.tokenizer.pad_token_id = '\n'` works, transformers converts the string into indice(s) internally. ### Expected behavior Right-hand-side of line setting pad_token_id should be an ID (int), not a string. Code in error message should run as-is
10-25-2022 01:44:13
10-25-2022 01:44:13
cc @Narsil and @gante <|||||>@morrisalp can you share a more detailed example ? The following code seems to work: ```python from transformers import pipeline generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B", device=0) generator.tokenizer.pad_token_id = generator.model.config.eos_token_id ```<|||||>@Narsil Your code throws an error on my end. I receive this error: > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > Cell In [4], line 1 > ----> 1 generator.tokenizer.pad_token_id = generator.model.config.eos_token_id > > File /disk2/morrisalper/notebooks/env/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:1169, in SpecialTokensMixin.pad_token_id(self, value) > 1167 @pad_token_id.setter > 1168 def pad_token_id(self, value): > -> 1169 self._pad_token = self.convert_tokens_to_ids(value) > > File /disk2/morrisalper/notebooks/env/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py:251, in PreTrainedTokenizerFast.convert_tokens_to_ids(self, tokens) > 248 return self._convert_token_to_id_with_added_voc(tokens) > 250 ids = [] > --> 251 for token in tokens: > 252 ids.append(self._convert_token_to_id_with_added_voc(token)) > 253 return ids > > TypeError: 'int' object is not iterable > <|||||>I can also confirm that the snippet shared above works on my end -- @morrisalp, if possible, can you update `transformers` to a more recent version? I'm afraid we can't fix bugs on older versions :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
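For completeness, a minimal sketch of the working pattern on a recent `transformers` version, where assigning the integer eos token id directly is accepted:

```python
# Minimal sketch: batched text generation once the pad token id is set (recent versions).
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B", device=0)
generator.tokenizer.pad_token_id = generator.model.config.eos_token_id  # int assignment, as in the error message
outputs = generator(["Hello world", "The quick brown fox"], max_new_tokens=20, batch_size=2)
```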
transformers
19,852
closed
Add BERT resources
This PR kicks off #19848 and adds official resources for BERT to the BERT model doc page. The resources are grouped by the types of tasks you can use the model for as well as using it for some other applications like inference and deployment. If possible, I think it'd also be cool to use the task icons we use on the Hub and Tasks page (see below for example). What do you think? ![Screen Shot 2022-10-24 at 1 26 25 PM](https://user-images.githubusercontent.com/59462357/197622917-a9978857-e0b4-4d42-b066-a4b44574c792.png)
10-24-2022 20:07:02
10-24-2022 20:07:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>I really like this!<|||||>Awesome, I'll start on some of the other models then! In the meantime, I'll check with @mishig25 if we can use the icons in the docs :)<|||||>We don't have an icon for multiple choice, so would it be ok to add it under question answering (since its kind of a variant of question answering)?<|||||>It's more a variant of sequence classification technically.<|||||>Ah ok! Would it be too confusing to include it under sequence classification then? We can also just leave it as is 😄
transformers
19,851
closed
Vilt support v1.9
Skips the tests if the installed torch version is lower than 1.10. Partially fixes https://github.com/huggingface/transformers/issues/18817
10-24-2022 19:42:51
10-24-2022 19:42:51
_The documentation is not available anymore as the PR was closed or merged._
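As an aside, a hypothetical sketch of how such a torch-version gate can be expressed; the decorator name here is illustrative and not necessarily the helper used in the PR.

```python
# Hypothetical sketch: skip a test class when the installed torch is older than 1.10.
import unittest

import torch
from packaging import version

require_torch_1_10 = unittest.skipUnless(
    version.parse(torch.__version__) >= version.parse("1.10"),
    "test requires torch >= 1.10",
)

@require_torch_1_10
class ViltIntegrationTest(unittest.TestCase):
    def test_inference(self):
        ...
```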
transformers
19,850
closed
Support Roberta on `accelerate`
# What does this PR do? This PR adds `Roberta` model family support with `accelerate`! This aims to support `int8` quantization for these models. Before merging I have noticed a few nits that I need to fix and discuss! I am unsure whether these fixes should be here or in `accelerate`. 1- If the model has the attribute `_keys_to_ignore_on_save`, it seems that it does not get properly initialized by `accelerate` (but I might be missing something here). AFAIK all models that have `accelerate` support for now have at most the attribute `_keys_to_ignore_on_load_missing` but not `_keys_to_ignore_on_save`. Therefore [when the base model gets saved in the `accelerate` test](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/models/roberta/test_modeling_roberta.py#L587), these parameters do not get saved in the state_dict. I had to come up with a modification in the `_load_pretrained_model` function to randomly initialize these parameters since they're ignored by the `_load_state_dict_into_meta_model`. This post-processing trick happens [here](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/src/transformers/modeling_utils.py#L2598). 2- Therefore I had to change the `accelerate` tests. Since the parameters that are assigned in the `_keys_to_ignore_on_save` are initialized randomly, I propose to check the logits compatibility between the base model and the accelerate model only for the attention outputs and not the `lm_head` output. These modifications happen [here](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/models/roberta/test_modeling_roberta.py#L569) - maybe this modification could happen in the super class? 3- Last nit: in the `accelerate` tests, it is better not to override the variable `inputs_dict` since inside the main loop, we can switch from a `xxxForMultipleChoice` to a `xxxForQuestionAnswering` model. Therefore, this variable does not get modified by the class function `_prepare_for_class`, since it [gets modified only if the model is a `MODEL_FOR_MULTIPLE_CHOICE` model](https://github.com/younesbelkada/transformers/blob/aefea27619b43cbb3e518ea972009e525676dc8d/tests/test_modeling_common.py#L165-L173) and is not reset to the correct `inputs_dict` afterwards. I also need to fix the slow test for the `lilt` model where I am getting `ValueError: embeddings.position_ids doesn't have any device set.` cc @sgugger
10-24-2022 18:12:49
10-24-2022 18:12:49
_The documentation is not available anymore as the PR was closed or merged._<|||||>Closing in favor of #19906
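To illustrate what this support is meant to unlock, here is a sketch under the assumption of a working `bitsandbytes` install; it is not the code changed in this PR.

```python
# Illustrative sketch of int8 loading enabled by accelerate support; assumes bitsandbytes is installed.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base",
    device_map="auto",   # let accelerate place the weights
    load_in_8bit=True,   # int8 quantization via bitsandbytes
)
```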
transformers
19,849
closed
Update `max_diff` in `test_save_load_fast_init_to_base`
# What does this PR do? This test seems flaky. After looking a bit deeper, I am not sure if we should expect to get the same (or very close) weights with/without `_fast_init` for the deleted key. https://github.com/huggingface/transformers/blob/9ecb13d63a9524478656b2233e6fb4e9f15d3fbf/tests/test_modeling_common.py#L343 https://github.com/huggingface/transformers/blob/9ecb13d63a9524478656b2233e6fb4e9f15d3fbf/tests/test_modeling_common.py#L400 My intuition is that **the values for that deleted key could be different with 2 different init. methods.** I also change the way of `max_diff` being calculated.
10-24-2022 17:48:24
10-24-2022 17:48:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you @patrickvonplaten! Although I still have some doubts: Assumption we have: > now this initialization should be the same for both the fast and slow init method What we do > _init_weights is overwritten by a deterministic self._mock_init_weights But `_mock_init_weights` is only defined in the testing module, and basically it just does `data.fill_(3)`. So the assumption is only True in our own testing (which uses `_mock_init_weights`). This won't be the case when we want to load the model outside the testing. So I am not very sure about the purpose of this test. But good for me if we don't want to touch it. We probably need to add some comment about the flakiness for some tests though. <|||||>@sgugger Could you check if the change in the way `max_diff` is calculated is worth merging 🙏? Thanks
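For context, a rough sketch (an assumption, not the test's actual code) of comparing two sets of weights by their maximum absolute difference:

```python
import torch

def max_weight_diff(state_dict_a, state_dict_b):
    # Largest absolute element-wise difference across all shared parameter tensors.
    return max(
        (state_dict_a[name].float() - state_dict_b[name].float()).abs().max().item()
        for name in state_dict_a
    )
```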
transformers
19,848
open
Add more model resources
A continuation of #19767 to add existing official resources (blog posts, notebooks, scripts, etc.) directly to the model docs for 20 of the most popular architectures based on last month's pageviews. I'm not sure whether there are existing resources for all of these models, but I'll check it out, and if not we can either: * Move to the next most popular model * Or it could be a good opportunity to create some resources for it Tracking model progress and updates: - [x] BERT (see #19852) - [x] T5 (see #19878) - [x] RoBERTa (see #19911) - [x] GPT2 (see #19879) - [x] BLOOM (see #19881) - [x] BART (see #19928) - [x] ViT (assigned to @stanleycai95) - [x] DistilBERT (see #19930) - [x] Wav2Vec2 (see #19931) - [x] LayoutLMV3 (see #19932) - [x] CLIP (assigned to @ambujpawar, see #20190) - [x] LayoutLM (assigned to @avisinghal6) - [x] GPT-J (assigned to @adit299) - [x] TrOCR (assigned to @huangperry) - [x] LayoutLMV2 (assigned to @y3sar) - [ ] ALBERT (assigned to @ENate) - [x] OPT (assigned to @alissadb) - [x] DeBERTa (assigned to @Saad135, see #20155) - [x] OpenAI GPT (assigned to @shogohida, see #20084) - [x] XLM-RoBERTa (assigned to @hazrulakmal)
10-24-2022 17:13:44
10-24-2022 17:13:44
@stevhliu @NielsRogge is there a plan to add the additional vision tasks we currently support to [this page](https://huggingface.co/docs/transformers/task_summary#image-classification)? For example object detection, segmentation, video classification. Then there seems to be an entire section missing on multimodal models. I understand for these tasks, we don't have `pipeline`s yet. But I believe we can still enlist these tasks with our existing resources (notebooks, blog posts, scripts, etc.). Another suggestion is to provide consolidated model links for each of the tasks enlisted in that page. For example, for image classification, it could be https://huggingface.co/models?pipeline_tag=image-classification. WDYT? Cc: @osanseviero @nateraw <|||||>> @stevhliu @NielsRogge is there a plan to add the additional vision tasks we currently support to [this page](https://huggingface.co/docs/transformers/task_summary#image-classification)? Yes I've pinged @stevhliu yesterday about this ;) he's working on it<|||||>Amazing! @stevhliu, if possible could tag me in the PR when you open it? Would love to take a look. <|||||>For sure @sayakpaul! I'm working on a proposal to reorganize/update the tasks summary docs a bit to include the additional vision/multimodal tasks you mentioned. Also great suggestion to link to the models; we can use these recently added [icons](https://github.com/huggingface/doc-builder/pull/317) in the docs :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am a newbie to open source and would like to contribute. @NielsRogge can I contribute to this issue?<|||||>Welcome @avisinghal6, we'd be more than happy for you to make a contribution! 🤗 The remaining models available are LayoutLM, LayoutLMV2, TrOCR, and OPT. Let me know which model you'd like to take on, and any questions you might have!<|||||>@stevhliu, I can start working on LayoutLM model. So I need to search for existing official resources on hugging face website and attach relevant ones to the documentation of LayoutLM model ?<|||||>Yup, check out issue #20055 for a list of resources to search from and the [BERT resources section](https://huggingface.co/docs/transformers/model_doc/bert#resources) for an example of what it should look like!<|||||>@stevhliu, I have added the resources for LayoutLM. Commit : d360021d9caef517854e371f8ac286f1c0a4f802<|||||>Hi @avisinghal6, would you mind opening a pull request on the Transformers repository for your contribution? You can check out this [guide](https://www.notion.so/huggingface2/Contribution-Guide-19411c29298644df8e9656af45a7686d#ed2eea8d355c497d9f05474e349f9f15) for more details how. Thanks! 😄 <|||||>Hi @stevhliu, I have created the PR#21377<|||||>Hi @stevhliu - I want to add resources for LayoutLMv2. Will submit a PR soon. <|||||>Looking forward to your contribution @SarangShrivastava!<|||||>hey, @stevhliu I want to work on any of the model if its available<|||||>Hey @SarangShrivastava , just checking to see if you're still interested in making a model contribution. Totally cool if you aren't available anymore, I'll unassign you from the model you claimed and let someone else take a stab at it. Thanks! 
@rajveer43 thanks for your interest! If any of the models free up, I'll let you know 🤗<|||||>Hello @stevhliu I would like to make my first contribution to transformers and this looks like a great place to start😄. Is there any way I can help? I can see all the models have been assigned is it possible to work on a model already assigned? Thanks in advance<|||||>Thanks for the interest; ~TrOCR~, ~LayoutLMV2~, and ~ALBERT~ are now available!<|||||>@stevhliu I would like to start working on LayoutLMV2. <|||||>@stevhliu I have a question regarding 🌎 emoji. Exactly which resources should be marked by it? Thanks in advance<|||||>🌎 is for unofficial Hugging Face resources, such as community-created ones. For example, if you look at the GPT2 [resources](https://huggingface.co/docs/transformers/v4.28.1/en/model_doc/gpt2#resources), there are links to notebooks for generating lyrics and tweets from contributors :)<|||||>@stevhliu thanks for the response. I noticed in BERT resources under text-classification a notebook by NielsRogge is not marked 🌎. Are [these notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/LayoutLMv2) considered official?<|||||>Hello @stevhliu I would like to make my first contribution to open source transformers and this looks like a great place to start. Is there any way I can help? Thanks in advance<|||||>Hi, thanks for your interest @unitinguncle! Currently, all the models are taken but I'll let you know if something is available. In the meantime, feel free to also take a look at some of the [Good First Issue's](https://github.com/huggingface/transformers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22) to see if there is anything else you may be interested in!
transformers
19,847
closed
Small update to model addition guide
This PR is a smaller version of #19778 (shelved for now) which includes only the more important fixes for maintaining accuracy: * remove *call-for-model-addition* program * `cookiecutter` adds a `mdx` instead of a `rst` file * fix to do list so it doesn't have numbers and bullets
10-24-2022 16:52:44
10-24-2022 16:52:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,846
closed
Fix warning when collating list of numpy arrays
# What does this PR do? As reported in #19822 there is a warning issued by PyTorch when we try to batch the result of a feature extractor when `return_tensors="pt"` is not activated. This PR fixes it by stacking the list of NumPy arrays into one big array first. Fixes #19822
10-24-2022 16:06:03
10-24-2022 16:06:03
_The documentation is not available anymore as the PR was closed or merged._
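A minimal illustration of the slow path and the fix (a demonstration assumption, not the library code):

```python
import numpy as np
import torch

features = [np.zeros((3, 224, 224), dtype=np.float32) for _ in range(8)]

# Converting a list of ndarrays directly triggers PyTorch's "extremely slow" warning.
slow_batch = torch.tensor(features)

# Stacking into one contiguous array first avoids the warning.
fast_batch = torch.tensor(np.stack(features))
```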
transformers
19,845
closed
Fix doctest for MarkupLM
# What does this PR do? The CI complains ``` UNEXPECTED EXCEPTION: SyntaxError('EOF while scanning triple-quoted string literal', ('<doctest markuplm.mdx[2]>', 7, 96, 'html_string = """\n <!DOCTYPE html>\n <html>\n <head>\n <title>Hello world</title>\n </head>\n <body>\n')) ``` due to the lack of `...` in some empty lines. But `make style` will remove those `...` after I add it to those lines. So I just removed those empty lines to pass the doctest.
10-24-2022 15:35:57
10-24-2022 15:35:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,844
closed
ImportError: libssl.so.3: cannot open shared object file: No such file or directory
### System Info Hi, I get the following error when calling `from transformers import BertModel, BertTokenizer`. The error: `ImportError: libssl.so.3: cannot open shared object file: No such file or directory` I tried the following (suggested in other similar threads): - instead of conda installation, use pip - downgrade tokenizers package Neither of those fixed the issue. Following versions were used: - Python 3.8.13 - tokenizers 0.13.1 - transformers 4.23.1 Any ideas how to fix this? ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction `from transformers import BertModel, BertTokenizer` ### Expected behavior No ImportError is expeted.
10-24-2022 15:20:38
10-24-2022 15:20:38
Hey, same problem, how to fix this?<|||||>After installing transformers with anaconda on Ubuntu 22 my scripts won't run anymore with tons of these ImportErrors: (i.e.) libffi.so.7: cannot open shared object file: No such file or directory Removed and created a new environment with a fresh install of dependencies.<|||||>Downgrading to `tokenizers=0.10.3` seems to work as an interim fix. The main issue appears to be from a conflict between some libraries (e.g. TF) requiring OpenSSL<=1.1.1, and tokenizers using OpenSSL3. I think that this can be circumvented by using transformers from pip or source, although in my particular project conda/mamba is a stricter requirement. Solution is basically discussed here: https://discuss.huggingface.co/t/importing-tokenizers-version-0-10-3-fails-due-to-openssl/17820/3 (@theRealMachineWhisperer, I can imagine your frustration, but please be kind to open source developers and try to frame your challenges constructively <3)<|||||>Downgrading the tokenizers leads to downgrading the transformers library as well; so if you needed to use a model/feature in the newer version, it would not work. I fixed this issue by using pip to install transformers (instead of conda).<|||||>Fixed my issue by running apt install libffi7<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> Downgrading the tokenizers leads to downgrading the transformers library as well; so if you needed to use a model/feature in the newer version, it would not work. I fixed this issue by using pip to install transformers (instead of conda). Thanks, I tried pip install transformers and it works.<|||||>> You can also try `pip install transformers --force-reinstall` I've found this can help refresh dependency versions. Do note, there may be an underlying compatibility concern in your environment.
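To summarize the environment-dependent workarounds mentioned in this thread (verify each against your own setup before applying):

```bash
# Workarounds collected from the thread above; which one applies depends on the environment.
pip install transformers               # install from PyPI instead of conda
pip install "tokenizers==0.10.3"       # interim fix: older tokenizers built against OpenSSL 1.1.1
sudo apt install libffi7               # for the related libffi.so.7 error on Ubuntu 22
```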
transformers
19,843
closed
Pretraining score (bpc) does not decrease after pretraining, saving and loading a Longformer model.
### System Info Python 3.7.4 torch==1.12.0+rocm5.1.1 torchaudio==0.12.0+rocm5.1.1 torchmetrics==0.8.0 torchvision==0.13.0+rocm5.1.1 transformers==4.20.1 datasets==2.3.2 ### Who can help? @ydshieh ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I am using a modified version of the [notebook](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb) provided in the official Longformer [repository](https://github.com/allenai/longformer). I had to change the recommended versions in that repository in order to be able to load with a specific model the recommended Transformers version would not load properly. This caused the need for slight modifications in the code in order to make it suitable for newer Transformers and Datasets versions. Dataset handling was also modified in order to use a custom dataset. I also changed the code in order to convert the roberta-4096 to a Longformer-4096 so it can be used more easily for downstream tasks. However, the main body of the my code very similar to theirs. The main problem comes after pretraining and saving the model in the following piece of code. ` """### Pretrain and Evaluate on masked language modeling (MLM) The following functions pretrain and evaluate a model on MLM. """ def pretrain_and_evaluate(args, model, tokenizer, eval_only, model_path, logger, max_length=512, separate_documents=False, cache_dir=None): data_files = {} data_files["validation"] = args.val_datapath if not eval_only: # data_files["train"] = args.val_datapath data_files["train"] = args.train_datapath extension = "text" datasets = load_dataset(extension, data_files=data_files, cache_dir=cache_dir) val_dataset = load_and_preprocess(datasets["validation"], tokenizer, max_length, separate_documents, logger) if eval_only: train_dataset = val_dataset else: logger.info(f'Loading and tokenizing training data is usually slow: {args.train_datapath}') train_dataset = val_dataset # TODO: remove this comment # train_dataset = load_and_preprocess(datasets["train"], tokenizer, max_length, separate_documents, logger) logger.info(f'Dataset loaded and pre-processed') logger.info(f'Set up Trainer...:') data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) trainer = Trainer(model=model, args=args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=val_dataset) # train_dataset=train_dataset, eval_dataset=val_dataset, prediction_loss_only=True,) logger.info(f'Initial evaluation...:') eval_loss_pre = trainer.evaluate() eval_loss_pre_int = eval_loss_pre['eval_loss'] logger.info(f'Initial eval bpc: {eval_loss_pre_int/math.log(2)}') bpc_results_in = {"ini_bpc": eval_loss_pre_int} if not eval_only: logger.info(f"Start pretraining from checkpoint {model_path}") trainer_results = trainer.train(resume_from_checkpoint=model_path) logger.info(f"Trainer results metrics: {trainer_results.metrics}") logger.info(f"Saving after pretraining in {model_path}") trainer.save_model(output_dir=model_path) logger.info(f'Mid evaluation...:') eval_loss_post = trainer.evaluate() eval_loss_post_int = eval_loss_post['eval_loss'] logger.info(f'Eval bpc after pretraining: {eval_loss_post_int/math.log(2)}') bpc_results_in["end_bpc"] = eval_loss_post_int model_copy = LongformerForMaskedLM.from_pretrained(model_path) trainer_copy = 
Trainer(model=model_copy, args=args, data_collator=data_collator,train_dataset=train_dataset, eval_dataset=val_dataset) eval_loss_pre_copy = trainer_copy.evaluate() eval_loss_pre_copy_int = eval_loss_pre_copy['eval_loss'] logger.info(f'Eval bpc after pretraining and loading: {eval_loss_pre_copy_int/math.log(2)}') return bpc_results_in ` Where the variable model has been previously loaded as: ` tokenizer = LongformerTokenizerFast.from_pretrained(model_path_longformer, cache_dir=model_args.cache_dir, model_max_length=4096) model = LongformerForMaskedLM.from_pretrained(model_path_longformer, cache_dir=model_args.cache_dir) ` And the function is called as: ` bpc_results = pretrain_and_evaluate(training_args, model, tokenizer, eval_only=False, model_path=model_path_longformer, logger=logger, cache_dir=model_args.cache_dir, max_length=model_args.max_pos, separate_documents=data_args.separate_documents) ` As you may observe I am: - Making and initial evaluation of the model (BPC0). - Pretraining the model and saving it. I am pretraining one step on the validation set in order to reduce computation. - Making an evaluation of the model (BPCa). - Loading the pretraining model another evaluation (BPCb). Since I am in a multi-node multi-gpu setting, I am using torch.distributed.run to launch the script. I am also setting these variables to False in the script arguments: ` --log_on_each_node False \ --save_on_each_node False \ ` ### Expected behavior I am trying to pre-train a Longformer on a custom text dataset starting from a RoBERTa checkpoint as indicated in the Longformer repository. From a normal execution, one would expect BPC0 > BPCa and BPCa == BPCb. However, what I am experiencing is BPC0 (2.0194287300109863) > BPCa (2.016040325164795) and BP0 == BPCb (2.0194287300109863). Which means that the pretraining seems to be working succesfully but somehow after saving and loading the model, the weights are not being managed/saved properly.
10-24-2022 14:24:11
10-24-2022 14:24:11
Hi @muxitox. If I understand correctly, the issue is about training->saving->loading->eval won't give the same result as the eval result before saving. Could you provide a self-contained complete code snippet that can reproduce the issue? You probably don't need a full training (we can't debug if the training needs 2 days). - By `self-contained complete`, it means we can copy a single block of code and run it directly. - Could you try with a much smaller model that runs on a single GPU? You can also feed the model with shorter input sequence. Thanks a lot! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for the delay. Did the experiment you suggested. In code_reqs.zip you can find the `convert_model_to_long_updated_github.py` script with a minimal setup to reproduce what you said in one gpu. You can also find in requirements.txt all the packages installed in the environment and in requirements_minimal.txt the minimal requirements I think you should need to make this going. [code_reqs.zip](https://github.com/huggingface/transformers/files/10122396/code_reqs.zip) I have obtained the following results: `{"ini_bpc": 1.6242934465408325, "end_bpc": 1.5721702575683594, "end_bpc_review": 1.5611544847488403, "proj_end_bpc": 1.5611544847488403, "bef_proj_end_bpc": 1.5611544847488403, "roberta_ini_bpc": 1.4125990867614746, "roberta_end_bpc": 1.0448211431503296, "roberta_end_bpc_review": 1.0905007123947144}` Where `ini_bpc` is the bpc before pre-training the longformer, `end_bpc` is the bpc after pre-training, `end_bpc_review` is the bpc after saving and reloading the pre-trained model, `bef_proj_end_bpc` is the bpc before manually modifying the global attention as suggested in the Notebook in the original Longformer repository and `proj_end_bpc` is the bpc after modifying the global attention. Same naming convention applies for the RoBERTa model. We can see that after training for 1 step on the dev split the bpc diminishes after pre-training. After re-loading, `end_bpc_review` keeps being lower even though its not exactly the same as in `end_bpc` (I think due to the data collator masking different tokens each evaluation). You can reproduce this executing the following line of code after laoding the installed environment: `python convert_model_to_long_updated_github.py --output_dir issue_results --per_device_eval_batch_size 8 --per_device_train_batch_size 2 --gradient_accumulation_steps 1 --seed 1 --learning_rate 0.00003` The behavior we observe for this script is the expected one I think, though in my original multi-node multi-gpu set-up the problem was that end_bpc_review = ini_bpc. 
In that case, this is the way I invoked the .py script (which is different than the one I shared to account for multi-node multi-gpu distribution): ``` python -m torch.distributed.run $DIST_ARGS ../../convert_model_to_long_updated.py \ --dataset_name_or_loading_script $dataset \ --separate_documents \ --model_name_or_path $model \ --seed $SEED \ --warmup_steps 500 \ --learning_rate $lr \ --weight_decay "0.01" \ --adam_epsilon "1e-6" \ --logging_steps 500 \ --save_strategy steps \ --save_steps 6500 \ --max_grad_norm "5.0" \ --per_device_eval_batch_size 8 \ --per_device_train_batch_size 2 \ --gradient_accumulation_steps $GRADIENT_ACCUM \ --evaluation_strategy steps \ --eval_steps 500 \ --do_train \ --do_eval \ --cache_dir $CACHE_DIR \ --output_dir $OUTPUT_DIR \ --logging_dir $LOGGING_DIR \ --max_steps $STEPS \ --log_on_each_node False \ --save_on_each_node False \ ``` I do not know if I set up something wrong or if indeed there is some issue in the saving process in multi-node settings. Just for context, I work in a cluster with 2 AMD GPU Instinct™ MI50 per node, so we use torch for ROCM5.1.1.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,842
open
Un-pin JAX from <= 0.3.6
### Feature request JAX was pinned to <= 0.3.6 in #16808 when a minor release of JAX was published that broke Optax: https://github.com/huggingface/transformers/blob/8b2501b4b9c5afad0dca2c964bc31b9fca09df4e/setup.py#L123 There have been 18 subsequent minor releases since (_c.f._ https://jax.readthedocs.io/en/latest/changelog.html), with v0.3.24 the latest. We should un-pin this hard requirement to allow users to install the latest version of JAX with Transformers. ### Motivation Should a user have JAX == 0.3.24 installed and then try to install Transformers using pip: ``` pip install transformers[flax] ``` JAX is downgraded to 0.3.6 due to the pinning requirement. Should the user be relying on the latest JAX features, this then requires JAX to be _re-upgraded_ to 0.3.24: ``` pip install -U jax ``` The same holds true for JAX dependencies (e.g. Flax or Optax). ### Your contribution Investigate all JAX/Flax/Optax version triplets compatible with Transformers. Blacklist those that break the CI; otherwise, allow users to install that version of JAX in `setup.py`. cc @cgarciae
10-24-2022 14:24:11
10-24-2022 14:24:11
Until the Jax/Flax team guarantees they will maintain compatibility in their ecosystem during releases, there should be an upper pin to avoid sudden breaks of the CI.<|||||>Very much agree @sgugger! I'll maintain an upper bound on the versions of JAX and JAX-derived libraries to ensure there aren't minor releases that break our CI 🤗 We can then deal with new versions of JAX on a version-by-version basis.
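As a purely hypothetical illustration of an upper-bound pin with an explicit block-list of broken releases (the actual bounds would come from the compatibility investigation described above):

```python
# Hypothetical dependency specifiers, not the real setup.py contents.
_flax_deps = [
    "jax>=0.2.8,!=0.3.2,<=0.3.24",
    "flax>=0.4.1,<=0.6.1",
    "optax>=0.0.8,<=0.1.3",
]
```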
transformers
19,841
closed
Update expected values
# What does this PR do? PR #19654 changed some string literals in `LEDModelIntegrationTests.test_seq_to_seq_generation` (use of `r"""`), which gives different outputs. I think the inputs are different with/without `r"""`, so I just updated the expected values in this PR.
10-24-2022 13:22:23
10-24-2022 13:22:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,840
closed
Fix OOM in config doctest
# What does this PR do? This model is too large to be tested (OOM). ```python >>> # Initializing a model (with random weights) from the gpt-neox-20b style configuration >>> model = GPTNeoXModel(configuration) # doctest: +SKIP ```
10-24-2022 12:25:10
10-24-2022 12:25:10
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,839
closed
[WIP] Fix edge cases in TopPLogitsWarper when top_p equals 0 or 1
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR fixes edge cases in TopPLogitsWarper when top_p equals 0 or 1. The following are specific contributions: 1. Make the code consistent with the `ValueError` error message. 2. Fix the `nan` problem. In the original PyTorch implementation, if `do_sample=True` and `top_p=0`, then each position in logits will be set to `-float("Inf")`, and then the `nan` problem will be encountered after softmax. 3. Make the three frameworks (PyTorch/TF/FLAX) have the same behavior. In the original implementation, when `do_sample=True` and `top_p=0`, the PyTorch framework would encounter `nan` errors, while TF and FLAX would keep the highest scoring token due to their right shift operations. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante @patrickvonplaten
10-24-2022 11:25:35
10-24-2022 11:25:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19839). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @NinedayWang 👋 The edge cases are intentionally breaking -- `top_p=0.0` means in theory that no token can be sampled (and in practice, that they have the same odds), while `top_p=1.0` is equivalent to not having the `top_p` operation. I struggle to see a use case for these changes, but I'm open to suggestions :)<|||||> > Hi @NinedayWang 👋 > > The edge cases are intentionally breaking -- `top_p=0.0` means in theory that no token can be sampled (and in practice, that they have the same odds), while `top_p=1.0` is equivalent to not having the `top_p` operation. > > I struggle to see a use case for these changes, but I'm open to suggestions :) Thanks for your quick reply. @gante I think if the changes are not made, there will be the following problems: 1. When `do_sample=True` and `top_p=0`: (1) In PyTorch, `TopPLogitsWarper` will filter any logit to be `-inf`, causing us to get logits filled with `nan` after softmax, then the program will throw a `nan` exception on `multinomial` operation as follows: ``` Traceback (most recent call last): File "eval_human_eval_wx.py", line 118, in <module> evaluate_on_human_eval( File "eval_human_eval_wx.py", line 98, in evaluate_on_human_eval gen_results = run_code_generation(pipe, input_prompt, num_completions=generate_batch_size, **gen_kwargs) File "eval_human_eval_wx.py", line 44, in run_code_generation code_gens = pipe(prompt, File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 187, in __call__ return super().__call__(text_inputs, **kwargs) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1074, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py", line 1081, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/base.py", line 990, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/pipelines/text_generation.py", line 229, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/generation_utils.py", line 1422, in generate return self.sample( File "/data/miniconda3/envs/env-3.8.8/lib/python3.8/site-packages/transformers/generation_utils.py", line 2071, in sample next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1) RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` In the 4.22 version of transformers, I used the setting of `do_sample=True` and `top_p=0` ​​to make my program do greedy decoding, because the implementation in version 4.22 uses a right shift operation to make sure that there is at least one token left. But when I updated the version to the latest, I got the above exception and took some time to find the reason. **The above is the main reason for submitting this PR. 
I agree that "top_p=0.0 means in theory that no token can be sampled", but in practice, we will encounter a `nan` error with no explicit error message, rather than sample each token with the same odds.** (2) In TF and FLAX, the implementation of `TopPLogitsWarper` in both frameworks uses a right-shift operation to achieve top-scoring token preservation, producing the same results as greedy decoding. So using the settings of `do_sample=True` and `top_p=0` will not report an error, which is different from the PyTorch framework. **The behavior of the three frameworks is different now.** 2. When `do_sample=True` and `top_p=1`: Yes, setting `top_p=1` is equivalent to not having the top_p operation, and it does not cause problems other than significantly reducing generation performance. **My change here is more to keep consistency with the error message of `ValueError`.** Additionally, the developers have ensured that `TopPLogitsWarper` will only be performed if `top_p<1` in the `_get_logits_warper` function of `generation_utils.py`: ``` if top_p is not None and top_p < 1.0: warpers.append(TopPLogitsWarper(top_p=top_p, min_tokens_to_keep=(2 if num_beams > 1 else 1))) ``` I just thought it would be better to be consistent.<|||||>@NinedayWang thank you for elaborating -- that makes sense! :) I'd like to suggest two modifications, to then merge the PR: 1. Regarding making the check on `top_p` more strict: because our library is used in production, we must avoid breaking changes whenever we can (which would be the case). Instead, can we raise a warning when `top_p` is either `0.0` (degenerated to argmax token selection) or `1.0` (redundant operation)? 2. Regarding the case where `top_p=0.0`: we both actually forgot the other argument in the discussion above, `min_tokens_to_keep`, which is not being respected. To fix it in this edge case, changing `if self.min_tokens_to_keep > 1:` to `if self.min_tokens_to_keep > 0:` OR forcing `min_tokens_to_keep` to be strictly positive (and removing this `if`) is enough. This change will also make the edge case return back to a greedy decoding-like behavior, as in TF and FLAX, with no exceptions being thrown.<|||||>@gante Thanks for your advice! I will work on this :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NinedayWang Are you planning to continue this work? :)<|||||>> @NinedayWang Are you planning to continue this work? :) @gante Terribly sorry! Other urgent matters delayed this work. I will continue it and update the progress soon!<|||||>@NinedayWang no worries, take your time :D
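For illustration, here is a minimal, self-contained sketch of the `min_tokens_to_keep` safeguard discussed in this thread. It is not the actual `TopPLogitsWarper` code, and its handling of the token that crosses the threshold may differ slightly from the library's, but it shows how `top_p=0.0` can fall back to greedy-like selection instead of producing a `nan` error:
```python
import torch

def top_p_filter(logits: torch.Tensor, top_p: float, min_tokens_to_keep: int = 1) -> torch.Tensor:
    """Filter a 1-D logits vector, always keeping at least `min_tokens_to_keep` tokens."""
    sorted_logits, sorted_indices = torch.sort(logits, descending=True)
    cumulative_probs = torch.cumsum(torch.softmax(sorted_logits, dim=-1), dim=-1)
    remove = cumulative_probs > top_p        # with top_p=0.0 this is True for every token
    remove[:min_tokens_to_keep] = False      # safeguard: never mask out the top tokens
    filtered = logits.clone()
    filtered[sorted_indices[remove]] = float("-inf")
    return filtered

logits = torch.tensor([1.0, 2.0, 3.0, 4.0])
filtered = top_p_filter(logits, top_p=0.0)
probs = torch.softmax(filtered, dim=-1)          # [0., 0., 0., 1.] -- no nan
print(torch.multinomial(probs, num_samples=1))   # always picks the argmax token
```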
transformers
19,838
closed
Add padding image transformation
# What does this PR do? Adds padding to the image transforms library. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
10-24-2022 10:31:42
10-24-2022 10:31:42
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19838). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge I've asked for re-review as the logic has changed a bit from when you first reviewed. <|||||>> Thanks for adding! Do you think it'd be useful to add a pytorch padding vs our implementation equivalence test? @NielsRogge What do you think should be covered wrt equivalence - is there logic you want to make sure is always aligned between the two? These transformations aren't meant to be a np copy of the torch library so there isn't a 1:1 mapping. <|||||>@NielsRogge I'm going to merge. If we decide to add the equivalence tests I'll add in a follow up PR. <|||||>Ok fine, it's just that I was working on a model (#19784) that leverages torch.nn.functional.pad as seen [here](https://github.com/mv-lab/swin2sr/blob/7eeebfba849bbc934ea254ec4cfa8e9d6fc0672c/models/network_swin2sr.py#L891). So it'd be nice to check equivalence between PyTorch and our NumPy implementation.
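For reference, a rough sketch of the kind of equivalence test mentioned above, comparing plain NumPy padding against `torch.nn.functional.pad`. This is illustrative only, not the library's actual test; the shapes and padding amounts are made up:
```python
import numpy as np
import torch
import torch.nn.functional as F

image = np.random.rand(3, 5, 5).astype(np.float32)   # (channels, height, width)

# Pad height by (1, 2) and width by (3, 4) with zeros.
np_padded = np.pad(image, ((0, 0), (1, 2), (3, 4)), mode="constant", constant_values=0.0)
# F.pad takes (width_left, width_right, height_top, height_bottom) for the last two dims.
pt_padded = F.pad(torch.from_numpy(image), (3, 4, 1, 2), mode="constant", value=0.0)

assert np.allclose(np_padded, pt_padded.numpy())
print("padded shape:", np_padded.shape)   # (3, 8, 12)
```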
transformers
19,837
closed
Fix nightly CircleCI
# What does this PR do? Fix a few nightly CircleCI issues. The effect can be seen [here](https://app.circleci.com/pipelines/github/huggingface/transformers/50144/workflows/aaa65460-bbb5-4048-af3e-9450af06e231).
10-24-2022 10:17:12
10-24-2022 10:17:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I accidentally pushed `GitPython` to `main` https://github.com/huggingface/transformers/commit/6f8064da6b0c8f003731f292acc64f281a2aea65 😨 <|||||>Ah so this is done, perfect!
transformers
19,836
closed
Save model using the save_pretrained method.
I trained the DETR model on a custom dataset using this tutorial: [Niels Rogge Fine_tuning_DetrForObjectDetection](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/DETR_minimal_example_(with_DetrFeatureExtractor).ipynb). I was trying to save that model using the save_pretrained method, but I'm getting an error. Can someone please help me with how to save a model and load it back for inference using the save_pretrained and from_pretrained methods? Thank you so much.
10-24-2022 10:13:51
10-24-2022 10:13:51
Please use the [forums](https://discuss.huggingface.co/) to get help debugging your code :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
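For anyone landing on this issue later, here is the usual save/reload pattern for this model. The directory name is just a placeholder, and since the original error message was not shared, this only shows the standard API usage rather than a fix for the reporter's specific setup:
```python
from transformers import DetrFeatureExtractor, DetrForObjectDetection

model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")

# Save both the model (config + weights) and the feature extractor to the same folder.
model.save_pretrained("./detr-custom")
feature_extractor.save_pretrained("./detr-custom")

# Reload for inference.
reloaded_model = DetrForObjectDetection.from_pretrained("./detr-custom")
reloaded_extractor = DetrFeatureExtractor.from_pretrained("./detr-custom")
reloaded_model.eval()
```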
transformers
19,835
closed
Display the number of trainable parameters in Trainer when launching a training
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> When launching a training with an instance of the `Trainer` class, a small recap is displayed with various pieces of information (total train batch size, total optimization steps, etc...). This PR adds to this recap the number of trainable parameters of the model because this is not always mentioned in the model card or in the documentation and I think it is a valuable figure to have in all runs. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-24-2022 08:11:53
10-24-2022 08:11:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,834
closed
support loading pretrained model from fsspec paths
### Feature request Make `from_pretrained` work with fsspec paths. ``` tokenizer = BartTokenizerFast.from_pretrained("hdfs://..../bert-base-uncased") ``` ### Motivation To make transformers suitable for use in the cloud or in clusters where files have to be stored in HDFS or S3. ### Your contribution I'd be willing to submit a PR, but `from_pretrained` is a very complicated function, so I may need assistance.
10-24-2022 07:49:50
10-24-2022 07:49:50
Thank you, but we're not interested in expanding support to other tools than the Hugging Face Hub :-)<|||||>I'd like to use the Hub too, but our cluster has no public internet access...<|||||>> I'd like to use the Hub too, but our cluster has no public internet access... Hi, can you explain how to load a pretrained model from an HDFS path?<|||||>Still no way to do that. I have to use pytorch_lightning's ModelCheckpoint callback to load & save. You can pre-process your pretrained model into a PyTorch checkpoint file and then load it with `torch.load(fsspec.open('xxx'))`.<|||||>> Thanks so much for your reply. Can you explain in more detail please? Thanks.<|||||>Haha, I found a simple solution without changing the source code: just use spark-submit with --conf spark.yarn.dist.archives=hdfs://your_own_hdfs_path/your_pretraining_model.zip#your_pretraining_model
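A minimal sketch of the workaround discussed in this thread: since `from_pretrained` does not accept fsspec URLs, copy the checkpoint files to local disk with fsspec first and then load from there. All paths here are placeholders, and the `hdfs` filesystem requires extra dependencies (e.g. pyarrow); for S3 you would use the `s3` protocol with s3fs instead:
```python
import fsspec
from transformers import BartForConditionalGeneration, BartTokenizerFast

# Copy the checkpoint directory from the cluster filesystem to local disk.
fs = fsspec.filesystem("hdfs")  # placeholder; pick the protocol your cluster uses
fs.get("hdfs://namenode/models/bart-base/", "/tmp/bart-base/", recursive=True)

# Then load it as a normal local directory.
tokenizer = BartTokenizerFast.from_pretrained("/tmp/bart-base")
model = BartForConditionalGeneration.from_pretrained("/tmp/bart-base")
```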
transformers
19,833
open
Position embedding in the DETR model
### System Info According to the argument definition of the `DetrDecoderLayer.forward()` specified here: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L723-L728 The `positional_embeddings` argument for the cross-attention should be assigned by the `position_embeddings` variable instead of `query_position_embeddings `. https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L757-L764 Is this an error in the argument definition or the code part? Thank you! ### Who can help? @NielsRogge ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction It is from the transformers code. Arguments definition: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L723-L728 Cross-attention code: https://github.com/huggingface/transformers/blob/bd469c40659ce76c81f69c7726759d249b4aef49/src/transformers/models/detr/modeling_detr.py#L757-L764 ### Expected behavior Either: 1. The `positional_embeddings` argument for the cross-attention should be assigned by the `position_embeddings` variable instead of `query_position_embeddings `, or 2. Update the documentation of the argument to the correct one.
10-24-2022 05:11:40
10-24-2022 05:11:40
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @NielsRogge , could you explain how to solve this issue? You just put the Good first issue label on it but it's not clear what a contributor would have to do to fix it.<|||||>Hi @NielsRogge, I would like to take this on. As Sylvain suggested, could you offer some context on how to go about this? Thanks :)<|||||>Yeah I marked this as good first issue as someone could take a deeper dive into DETR's position embeddings. Reading the [paper](https://arxiv.org/abs/2005.12872) for that could definitely be helpful. But the implementation is correct, it's probably internal variables/docstrings that need to be updated. From the paper: > Since the decoder is also permutation-invariant, the N input embeddings must be different to produce different results. These input embeddings are learnt positional encodings that we refer to as object queries, and similarly to the encoder, we add them to the input of each attention layer. So the `position_embeddings` argument of the cross-attention layer refers exactly to these input embeddings, often also called "content embeddings" or "object queries". Then a bit later on in the paper they state: > There are two kinds of positional encodings in our model: spatial positional encodings and output positional encodings (object queries). So the `key_value_position_embeddings` argument of the cross-attention layer refers to these spatial position encodings. These are added to the keys and values in the cross-attention operation. So for clarity we could update the "position_embeddings" argument to "object_queries", and the "key_value_position_embeddings" argument to "spatial_position_embeddings".<|||||>Hello @daspartho @NielsRogge , I wanted to inquire as to whether any progress was made on this? I'd like to take a look.<|||||>Hello @NielsRogge , I am currently working on this issue. I've read the article and I understand what has to be changed. My question is whether we only have to change the `DetrDecoderLayer` class (in the respective `forward` function mentioned above), or whether all position_embeddings arguments have to change too. I did some local tests, and noticed that after changing only the forward function I mentioned to `object_queries` and `spatial_position_embeddings`, many tests broke because of wrong arguments being passed, since the names changed. In order to change these arguments, do we need to change them in the tests as well? I looked at some tests, but I do think the problem is in the code itself, since classes related to that one would be passing arguments incorrectly. This is my first contribution to an open source project of this size, and I'm really happy to do it. Thanks in advance.<|||||>Hey @NielsRogge is this issue still open? If yes, can I take it?<|||||>Hey @hackpk, I'm putting the finishing touches on my PR to fix this issue, so I don't know...<|||||>That's great. I'll look for another issue then. Thanks. <|||||>No problem, good luck :D
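To make the naming proposal above concrete, here is a purely conceptual sketch (not DETR's actual code) of where the two kinds of position encodings enter the decoder's cross-attention; the function name and tensor shapes are made up for illustration:
```python
import torch

def cross_attention_inputs(hidden_states, encoder_hidden_states, object_queries, spatial_position_embeddings):
    # Learned object queries (the "output" position encodings) are added to the decoder queries.
    queries = hidden_states + object_queries
    # Spatial position encodings are added to the keys coming from the encoder feature map.
    keys = encoder_hidden_states + spatial_position_embeddings
    # Values carry no position information in this sketch.
    values = encoder_hidden_states
    return queries, keys, values

q, k, v = cross_attention_inputs(
    torch.zeros(1, 100, 256),   # decoder hidden states: 100 object queries, hidden size 256
    torch.zeros(1, 49, 256),    # flattened 7x7 encoder feature map
    torch.randn(1, 100, 256),   # learned object queries
    torch.randn(1, 49, 256),    # sine or learned spatial encodings
)
print(q.shape, k.shape, v.shape)
```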
transformers
19,832
closed
Can we add optional kwargs to various models in addition to their required fixed inputs?
### Feature request For example, in the class BertForMaskedLM, we have to pass the needed arguments as follows: def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): I wonder if it's possible to add an optional **\*\*kwargs** so that we could customize model layers more easily. Like this: def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None, **kwargs, ): ### Motivation When I want to add an additional operation to a submodule inside a large model, I need to use extra data as input. But to implement this, I have to add the extra inputs to every module so they can finally be passed on to the correct submodule. I don't know whether there are more efficient ways to solve this problem. If so, I'd appreciate it if you could point them out. ### Your contribution Maybe None
10-24-2022 03:28:43
10-24-2022 03:28:43
We can't add blank kwargs like this, as a user who makes a typo in their inputs will then not get any error and not realize they did something wrong.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Yeah, now I see the hidden trouble if we add blank kwargs to a model. But what about adding some arguments that are specified by users? For example, we add an argument `custom_arg=None` to `__init__(..., custom_arg=None)`, and then the model adds the argument to its forward function like `self.forward = partial(self.forward, custom_arg=None)`, and finally this argument is passed down iteratively to all the submodules. Maybe in this way we could keep inputs safe from unnoticed mistakes while increasing a model's flexibility?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
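As an illustration of an alternative to blank `**kwargs`, here is a hypothetical wrapper (the class name, the `extra_features` argument, and the projection layer are all made up, not a library feature) that threads one explicit extra input into the model without touching every submodule's signature:
```python
import torch
from torch import nn
from transformers import BertForMaskedLM

class BertWithExtraSignal(nn.Module):
    """Hypothetical wrapper: injects one named extra input via inputs_embeds."""

    def __init__(self, model_name="bert-base-uncased", extra_dim=10):
        super().__init__()
        self.bert = BertForMaskedLM.from_pretrained(model_name)
        self.extra_proj = nn.Linear(extra_dim, self.bert.config.hidden_size)

    def forward(self, input_ids, attention_mask=None, extra_features=None, **bert_kwargs):
        inputs_embeds = self.bert.get_input_embeddings()(input_ids)
        if extra_features is not None:
            # Inject the custom signal once, before the encoder, instead of passing it
            # through every intermediate module's signature.
            inputs_embeds = inputs_embeds + self.extra_proj(extra_features).unsqueeze(1)
        return self.bert(inputs_embeds=inputs_embeds, attention_mask=attention_mask, **bert_kwargs)

model = BertWithExtraSignal()
out = model(
    input_ids=torch.tensor([[101, 2023, 2003, 103, 102]]),
    extra_features=torch.randn(1, 10),
)
print(out.logits.shape)   # (1, 5, vocab_size)
```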
transformers
19,831
closed
[WIP] Donut flax implementation
This PR adds JAX support for the [Donut Model](https://huggingface.co/docs/transformers/model_doc/donut). This work is very much in progress: I need to add documentation, and I am still not sure whether I have added an adequate number of tests for the changes I have made so far, so it would be great if someone could take a look and maybe provide some feedback on code quality and the general direction. Things which are done - Code is functional and passes integration tests with existing Donut models on the HF Hub Things which are not done - Update documentation - Clear list of TODOs Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-23-2022 16:58:20
10-23-2022 16:58:20
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19831). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,830
closed
tictac game
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 13:51:27
10-23-2022 13:51:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't see how this is related to Transformers.
transformers
19,829
closed
Improve check copies
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> print first diff line intead of first code part line ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 13:29:13
10-23-2022 13:29:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19829). All of your documentation changes will be reflected on that endpoint.
transformers
19,828
closed
simplify dpt copying
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 13:11:50
10-23-2022 13:11:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR, but we don't accept renaming layers like this as it's a breaking change. Why? It doesn't even change `DPTPreTrainedModel`, so the probability that someone uses the inner layers is not that big.
transformers
19,827
closed
run newer black
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Run newer black. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 13:09:21
10-23-2022 13:09:21
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19827). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR. We won't switch black versions until the end of the year (as was mentioned in previous PRs like this one) when we switch to the 2023 version, as it makes every standing PR conflict with main.<|||||>> Thanks for your PR. We won't switch black versions until the end of the year (as was mentioned in previous PRs like this one) when we switch to the 2023 version, as it makes every standing PR conflict with main. I think it's not a big problem here: the diff isn't that big, so the probability that someone changes something in those lines is low. It also only produces simple merge conflicts.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,826
closed
fix bart compatibility with numpy tensors
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Changes only on `modeling_bart.py` file. Other changes just by copying mechanics. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 11:22:43
10-23-2022 11:22:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19826). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR, but PyTorch models do not accept NumPy arrays as inputs, and your changes won't make any difference for that I believe.<|||||>> but PyTorch models do not accept NumPy arrays as inputs, and your changes won't make any difference for that I believe. Yep, that's true. But in most of these cases a clearer error would be produced than `'int' object is not callable`, which is caused by the `size` property of a NumPy array.<|||||>@sgugger, ping<|||||>I think I have been pretty clear on why we don't want this change. PyTorch models do not support NumPy arrays and we are not interested in replacing all the `size()` by `shape`.<|||||>> I think I have been pretty clear on why we don't want this change. PyTorch models do not support NumPy arrays and we are not interested in replacing all the `size()` by `shape`. Yep, I understand. This PR is only about clearer errors on NumPy tensors. Nothing else. So I think it makes debugging a little easier.<|||||>@sgugger, ping<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
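For context, a quick illustration of the error mode discussed in this thread: NumPy's `size` is an integer attribute, while PyTorch's `Tensor.size()` is a method, so passing a NumPy array into code that calls `.size()` fails with `'int' object is not callable`; `.shape` works for both:
```python
import numpy as np
import torch

np_ids = np.zeros((2, 8), dtype=np.int64)
pt_ids = torch.zeros((2, 8), dtype=torch.int64)

print(pt_ids.size())               # torch.Size([2, 8]) -- size is a method on torch tensors
print(np_ids.shape, pt_ids.shape)  # .shape works in both frameworks

try:
    np_ids.size()                  # np_ids.size is the integer 16, so calling it fails
except TypeError as err:
    print(err)                     # 'int' object is not callable
```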
transformers
19,825
closed
ImportError: cannot import name 'PegasusTokenizer' from 'transformers'
@patrickvonplaten Tried: from transformers import PegasusTokenizer import torch Output: --------------------------------------------------------------------------- ImportError Traceback (most recent call last) /tmp/ipykernel_7943/3816365261.py in <cell line: 1>() ----> 1 from transformers import PegasusTokenizer 2 import torch ImportError: cannot import name 'PegasusTokenizer' from 'transformers'
10-23-2022 09:25:30
10-23-2022 09:25:30
Can you check your transformers version and update it if it's too old? It works fine for me:
```
In [1]: from transformers import PegasusTokenizer

In [2]: import transformers

In [3]: transformers.__version__
Out[3]: '4.23.1'
```<|||||>Gently pinging @ArthurZucker here<|||||>Hey, as mentioned, you are probably using an old version of transformers. I am not able to reproduce this bug. You should use `transformers>=3.1`, as `Pegasus` was first introduced in [release 3.1](https://github.com/huggingface/transformers/releases?q=PegasusForConditionalGeneration&expanded=true)<|||||>Did you find any solution for this problem? <|||||>Yes, as mentioned in my previous answer, updating `transformers` 🤗 <|||||>> Yes, as mentioned in my previous answer, updating `transformers` 🤗 This worked for me. Thanks!<|||||>Thank you. It seems to work for me too!
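A quick way to confirm the installed version before importing, assuming the failure is indeed caused by an outdated install (note that `PegasusTokenizer` also needs the `sentencepiece` package, and the checkpoint name below is just one public example):
```python
import transformers

print(transformers.__version__)   # Pegasus needs transformers >= 3.1
# If the version is too old: pip install --upgrade transformers sentencepiece

from transformers import PegasusTokenizer

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
print(tokenizer("Hello world")["input_ids"])
```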
transformers
19,824
closed
Added translation of converting_tensorflow_models.mdx to Portuguese Issue #16824
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #16824 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 05:47:23
10-23-2022 05:47:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,822
closed
data_collator.py:131: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction https://colab.research.google.com/drive/1gK9iXBiIEmmt2OMq6wmxwcpslkchHJh-?usp=sharing See the output of `trainer.train()` ### Expected behavior The warning should not occur.
10-23-2022 03:13:58
10-23-2022 03:13:58
Thanks for the report, the PR linked above should fix it.
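For readers hitting the same warning, a small sketch of the slow pattern behind it and the kind of fix applied in the linked PR (stack the list of arrays into a single ndarray before converting); the shapes are made up:
```python
import numpy as np
import torch

rows = [np.zeros(128, dtype=np.int64) for _ in range(32)]

slow = torch.tensor(rows)            # list of ndarrays -> triggers the UserWarning on torch >= 1.12
fast = torch.tensor(np.array(rows))  # stack into one ndarray first -> no warning, much faster

assert torch.equal(slow, fast)
print(fast.shape)                    # torch.Size([32, 128])
```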
transformers
19,821
closed
Spanish translation of multiple_choice.mdx, question_answering.mdx.
# What does this PR do? Translates `multiple_choice.mdx` and `question_answering.mdx` into Spanish. Also updates the `_toctree.yml` file accordingly. Fixes #15947 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? @osanseviero @sgugger
10-23-2022 02:12:46
10-23-2022 02:12:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @osanseviero, I just updated all files to reflect your suggestions :) <|||||>Thanks a lot! :fire: <|||||>@sgugger thanks for the merge! However, shouldn't the original issue (#15947) addressing the translations remain open? I think it was closed automatically because I referenced it here.<|||||>Yes, that's because you used the word "Fixes". You should have said something like "Related to ..." :-)<|||||>Oops, sorry! Lesson learned :')
transformers
19,820
closed
fix broken links in testing.mdx
# What does this PR do? fix a broken link in https://huggingface.co/docs/transformers/main/en/testing I also found the link I marked out in the picture below is also broken, but I don't know how to fix it. You can go to "How transformers are tested" from [here](https://huggingface.co/docs/transformers/main/en/testing#how-transformers-are-tested) ![image](https://user-images.githubusercontent.com/30254428/197367017-d0aa691d-54cf-48f9-a4f4-89ec693396d6.png) ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-23-2022 00:14:38
10-23-2022 00:14:38
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,819
closed
fix vision enc-dec models conversion to onnx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19811 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-22-2022 22:04:31
10-22-2022 22:04:31
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19819). All of your documentation changes will be reflected on that endpoint.<|||||>cc @lewtun <|||||>Hello @kventinel , thanks for the PR. I am currently looking for the same thing for Whisper model and was going to update the VisionEncoderDecoderModel soon. The better way would be to run the full model by passing in the `encoder_outputs`. It gives you decoder + other parts but skips the encoder. Hence, it would be more generic than creating separate functions. You can follow up on the PR's [19525](https://github.com/huggingface/transformers/pull/19525) and [420](https://github.com/huggingface/optimum/pull/420/files#diff-c27ea812737bc6ccfe34f92a4ff0d1ec473a41b8c8012bfdb08bb22a46104ddeR321) for the changes for adding the encoder_outputs. You could add the encoder_outputs export for the model in a similar manner after the PR 19525 is merged.<|||||>Hi @kventinel , since the PR [19525](https://github.com/huggingface/transformers/pull/19525) is merged would you like to update the model config to use `encoder_outputs`? Let me know if you have any questions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,818
closed
Type hints
# What does this PR do? Type-hints for `realm`, `Speech2Text2`, `SpeechToText` and `speech-encoder-decoder` @Rocketknight1
10-22-2022 21:18:19
10-22-2022 21:18:19
Not sure why tests are failing<|||||>Hi @IMvision12, I think tests are failing because in some cases you overwrote the default argument values! It's easy to see if you look in the [files changed interface](https://github.com/huggingface/transformers/pull/19818/files). If you set those argument values back to their original defaults then tests should pass. ![image](https://user-images.githubusercontent.com/12866554/197528126-065ac6b4-e277-4c0b-8648-177447dca744.png) <|||||>Okay I will change that. <|||||>@Rocketknight1 Tests are still failing<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Done!!
transformers
19,817
closed
[Doctest] Add configuration_maskformer.py
# What does this PR do? Adds `configuration_maskformer.py` to `utils/documentation_tests.txt` Based on #19487 @ydshieh can you please review? thanks :) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-22-2022 20:42:39
10-22-2022 20:42:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,816
closed
Corrected spelling errors in README_ko.md
1. 독립적이여서 to 독립적 이어서 2. 마스킹된 to 마스킹 된 3. 파이썬 to 파이선 # What does this PR do? Fixed typos and spelling mistakes <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-22-2022 18:39:44
10-22-2022 18:39:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19816). All of your documentation changes will be reflected on that endpoint.<|||||>@eunseojo could you give a quick look?
transformers
19,815
closed
Faster TrOCR (in particular) and faster batch text generation (in general)
### Feature request Text generation in auto-regressive decoders can be optimized by not computing next-token logits for sentences that have already reached `eos` or `pad`. This is a [notebook](https://github.com/IlyasMoutawwakil/Faster-TrOCR/blob/main/Faster_TrOCR_with_ONNX%2BAutoRegressive_Hack.ipynb) where I ran my performance experiments. The last part contains a modified `forward` method of the `VisionEncoderDecoder` class. One important thing to note is that it should only be used for inference (eval), not during the training phase. ### Motivation I have been using TrOCR for a while and was trying to make it faster. I tried ONNXizing it, but unfortunately that only made it slower (ONNX communication bottleneck?). I dug deeper into the source code and noticed that when generating text, it computes the next-token logits for the whole batch (I had batches of different-length text lines because I was running it on text lines extracted from documents), which is not necessary and only makes sense for a batch of fixed text length. ### Your contribution I made a class where I overrode the `forward` function of the `VisionEncoderDecoder` class and it worked (half the compute time). This is the performance of the native implementation on a 17-text-line document:

```python
%timeit preprocessor.tokenizer.batch_decode( \
    cuda_hf_model.generate( \
        pixel_values.to('cuda'), \
        max_length=96 \
    ), \
    skip_special_tokens=True \
)
# 1.68 s ± 2.98 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

This is the performance of the modified implementation:

```python
%timeit preprocessor.tokenizer.batch_decode( \
    modified_cuda_hf_trocr.generate( \
        pixel_values.to('cuda'), \
        max_length=96, \
    ), \
    skip_special_tokens=True)
# 894 ms ± 1.21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

I tried it on both CUDA and CPU, and the gain is even greater on CPU.
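To illustrate the idea independently of the notebook, here is a rough sketch of a greedy loop that only runs the decoder for rows that have not produced `eos` yet; finished rows simply get `pad` appended. `decoder_step` is a hypothetical stand-in for a single decoder forward pass, not an existing `transformers` API:

```python
import torch

def generate_skip_finished(decoder_step, encoder_hidden, bos_id, eos_id, pad_id, max_length=96):
    # decoder_step(tokens, encoder_states) is assumed to return next-token
    # logits of shape (num_active_rows, vocab_size).
    batch = encoder_hidden.size(0)
    device = encoder_hidden.device
    tokens = torch.full((batch, 1), bos_id, dtype=torch.long, device=device)
    unfinished = torch.ones(batch, dtype=torch.bool, device=device)
    for _ in range(max_length - 1):
        active = unfinished.nonzero(as_tuple=True)[0]
        if active.numel() == 0:
            break
        # Only the still-active rows go through the decoder.
        logits = decoder_step(tokens[active], encoder_hidden[active])
        next_tokens = logits.argmax(dim=-1)
        # Finished rows just receive padding.
        column = torch.full((batch,), pad_id, dtype=torch.long, device=device)
        column[active] = next_tokens
        tokens = torch.cat([tokens, column.unsqueeze(1)], dim=1)
        unfinished[active] = next_tokens != eos_id
    return tokens
```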
10-22-2022 17:04:47
10-22-2022 17:04:47
cc @NielsRogge <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,814
closed
t5 model's decoder do not use EncDecAttention.key & value in text generation task
### System Info - `transformers` version: 4.20.1 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.10 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.13.0.dev20220709 (False) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patrickvonplaten, @Narsil, @gante ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I use `transformers.T5ForConditionalGeneration` to run inference on a text generation task (NaturalQuestions). The longer the answer, the more forward passes the `decoder` makes. But when I logged the forward pass of each layer, I found that the model runs the `encoder` once, an `A type decoder` pass once, and a `B type decoder` pass many times, depending on the answer length. ### Expected behavior The `A type decoder` uses EncDecAttention q, k, v, o ![image](https://user-images.githubusercontent.com/84232793/197341985-bf12427d-2323-475c-865a-e3a0a61d7bde.png) But the `B type decoder` only uses EncDecAttention q, o ![image](https://user-images.githubusercontent.com/84232793/197341988-d9466cfa-c217-4cc2-ae73-296b50d4c77c.png) Many thanks!! :)
10-22-2022 13:36:14
10-22-2022 13:36:14
Hi @CaffreyR 👋 Our models, when used for generation, have a cache that stores repeated computations (the keys and values of the attention layers), which is turned on by default. If you wish to see the full (redundant) computations being executed, try using the model with `use_cache=False`.<|||||>Great! Thanks for your help!
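As a small, self-contained illustration of the suggestion above (using `t5-small` purely as an example checkpoint):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")

# Default: the cross-attention keys/values are computed once and cached.
cached = model.generate(**inputs, max_length=32)

# use_cache=False disables the cache, so every decoding step recomputes
# the full (redundant) attention projections.
uncached = model.generate(**inputs, max_length=32, use_cache=False)

print(tokenizer.decode(cached[0], skip_special_tokens=True))
print(tokenizer.decode(uncached[0], skip_special_tokens=True))
```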
transformers
19,813
closed
Create 'n'numbers_sumfinder.py
For hacktoberfest # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-22-2022 09:50:47
10-22-2022 09:50:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, we do not need this module.
transformers
19,812
closed
transformers.data.metrics: replace mention of 🤗 Datasets in DEPRECATION_WARNING with 🤗 Evaluate
# What does this PR do? Changes DEPRECATION_WARNING in `src/transformers/data/metrics/__init__.py` to point to 🤗 Evaluate for metrics functionality instead of 🤗 Datasets, whose metrics functionality has been deprecated and moved to 🤗 Evaluate since the metrics in this file were deprecated in 🤗 Transformers ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @albertvillanova
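For readers hitting the warning, the replacement usage with 🤗 Evaluate looks roughly like this (the `accuracy` metric is just an example):

```python
import evaluate

metric = evaluate.load("accuracy")
result = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(result)  # accuracy of 2/3 for this toy example
```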
10-22-2022 07:37:40
10-22-2022 07:37:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Not sure what's causing that test failure.
transformers
19,811
closed
ONNX conversion from VisionEncoderDecoderModel with different dimensions
### System Info - `transformers` version: 4.23.0.dev0 - Platform: Linux-4.4.0-87-generic-x86_64-with-glibc2.23 - Python version: 3.9.13 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.12.1 (True) ### Who can help? @NielsRogge, @patrickvonplaten ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am trying to convert a VisionEncoderDecoder model to ONNX using the feature that has been recently merged https://github.com/huggingface/transformers/pull/19254. However, when two pretrained models whose model dimensions are different, It reproduces errors as below. ## Model Load & Save ```python from transformers import VisionEncoderDecoderModel, BertTokenizer, AutoFeatureExtractor encoder_name_or_path = "hf-internal-testing/tiny-random-vit" decoder_name_or_path = "fnlp/bart-base-chinese" model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained( encoder_name_or_path, decoder_name_or_path, ) tokenizer = BertTokenizer.from_pretrained(decoder_name_or_path) feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_name_or_path) output_dir = "outputs" model.save_pretrained(output_dir) feature_extractor.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) ``` ## Model Structure ``` VisionEncoderDecoderModel( (encoder): SwinModel(...) (decoder): BartForCausalLM(...) (enc_to_dec_proj): Linear(in_features=32, out_features=768, bias=True) ) ``` There exists a new linear layer to project encoder hidden states in [modeling_vision_encoder_decoder.py#L217](https://github.com/huggingface/transformers/blob/main/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L217) ```python # encoder outputs might need to be projected to different dimension for decoder if ( self.encoder.config.hidden_size != self.decoder.config.hidden_size and self.decoder.config.cross_attention_hidden_size is None ): self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size) ``` ## Conversion to ONNX ```bash python -m transformers.onnx --model=outputs/ --feature=vision2seq-lm onnx/ --atol 1e-3 ``` Output: ```bash Traceback (most recent call last): File "/home/user/anaconda3/envs/swinocr/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/user/anaconda3/envs/swinocr/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/backup2/mkf/transformers/src/transformers/onnx/__main__.py", line 180, in <module> main() File "/backup2/mkf/transformers/src/transformers/onnx/__main__.py", line 118, in main onnx_inputs, onnx_outputs = export( File "/backup2/mkf/transformers/src/transformers/onnx/convert.py", line 339, in export return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device) File "/backup2/mkf/transformers/src/transformers/onnx/convert.py", line 192, in export_pytorch onnx_export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/__init__.py", line 350, in export return utils.export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 163, in export _export( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 1074, in _export graph, params_dict, torch_out = _model_to_graph( File 
"/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 727, in _model_to_graph graph, params, torch_out, module = _create_jit_graph(model, args) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 602, in _create_jit_graph graph, torch_out = _trace_and_get_graph_from_model(model, args) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/onnx/utils.py", line 517, in _trace_and_get_graph_from_model trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward graph, out = torch._C._create_graph_by_tracing( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper outs.append(self.inner(*trace_inputs)) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 1851, in forward outputs = self.model.decoder( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 1104, in forward layer_outputs = decoder_layer( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 439, in forward hidden_states, cross_attn_weights, cross_attn_present_key_value = self.encoder_attn( File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File "/backup2/mkf/transformers/src/transformers/models/bart/modeling_bart.py", line 201, in forward key_states = self._shape(self.k_proj(key_value_states), -1, bsz) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward result = self.forward(*input, **kwargs) File 
"/home/user/anaconda3/envs/swinocr/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: mat1 and mat2 shapes cannot be multiplied (16x32 and 768x768) ``` ### Expected behavior It seems that the existing ONNX conversion for EncoderDecoderModel only converts the encoder and decoder, and ignores this linear layer. If I change the model to [microsoft/trocr-base-handwritten](https://huggingface.co/microsoft/trocr-base-handwritten), which has a similar structure and the same dimensions (i.e. no linear layer), the conversion works. ```bash python -m transformers.onnx --model=microsoft/trocr-base-handwritten --feature=vision2seq-lm trocr_onnx/ --atol 1e-3 ``` Thanks a lot for looking into it :)
10-22-2022 07:26:51
10-22-2022 07:26:51
I'm try to fix your problem in #19819<|||||>Cc @mht-sharma <|||||>I've come across an issue with the ONNX conversion of TrOCR-base. I'm not sure if they are entirely related, but I've managed to convert the models with huggingface.onnx into actual onnx files. After conversion I obtain an `encoder.onnx` and a `decoder.onnx` file, which seems to be as it should be. Then, for processing an input image I do the following: 1. Send image through the processor: `processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-str")` `processor_output = processor(img)` 2. Send output of processor through encoder `encoder_output = encoder(processor_output )` 3. Send output of encoder through the decoder. `decoder_output = decoder(trace_tensor)` The conversion happens like this: ``` python -m transformers.onnx --model=microsoft/trocr-base-str --feature=vision2seq-lm models/onnx --atol 1e-3 ``` This yields 2 errors. 1 after running the encoder (the error that occurs breaks the process). A second one occurs with the decoder. For this I have created a tensor that mimics the input shapes of the batch that the decoder expects. ERROR ENCODER: ``` onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,100,100,64}, requested shape:{1,9216,128} ``` ERROR DECODER: ``` File "C:\Users\Miniconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 196, in run raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) ValueError: Model requires 3 inputs. Input Feed contains 1 ``` @NielsRogge @mht-sharma any clues on this?<|||||>> I've come across an issue with the ONNX conversion of TrOCR-base. I'm not sure if they are entirely related, but I've managed to convert the models with huggingface.onnx into actual onnx files. After conversion I obtain an `encoder.onnx` and a `decoder.onnx` file, which seems to be as it should be. > > Then, for processing an input image I do the following: > > 1. Send image through the processor: > `processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-str")` > `processor_output = processor(img)` > 2. Send output of processor through encoder > `encoder_output = encoder(processor_output )` > 3. Send output of encoder through the decoder. > `decoder_output = decoder(trace_tensor)` > > The conversion happens like this: > > ``` > python -m transformers.onnx --model=microsoft/trocr-base-str > --feature=vision2seq-lm models/onnx > --atol 1e-3 > ``` > > This yields 2 errors. 1 after running the encoder (the error that occurs breaks the process). A second one occurs with the decoder. For this I have created a tensor that mimics the input shapes of the batch that the decoder expects. > > ERROR ENCODER: > > ``` > onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. 
The input tensor cannot be reshaped to the requested shape. Input shape:{2,100,100,64}, requested shape:{1,9216,128} > ``` > > ERROR DECODER: > > ``` > File "C:\Users\Miniconda3\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 196, in run > raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) > ValueError: Model requires 3 inputs. Input Feed contains 1 > ``` > > @NielsRogge @mht-sharma any clues on this? Hi @Fritskee, the decoder ONNX model expects 3 inputs during inference, namely: `input_ids`, `attention_mask` & `encoder_hidden_states` (output from the encoder model). Hence, the above error. You need to provide the appropriate start token id and attention mask as input to the decoder to start the sequence. Encoder error: Please ensure if you have followed appropriate steps to generate the input. For the above model, the following steps worked for me: ```python # load image from the IIIT-5k dataset url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png' image = Image.open(requests.get(url, stream=True).raw).convert("RGB") pixel_values = processor(images=image, return_tensors="pt").pixel_values ``` For more info check: [trocr-base-str](https://huggingface.co/microsoft/trocr-base-str) Let me know if there are additional issues with the model. <|||||>@mht-sharma Thanks for you assistance. I just tried the suggested code of yours for the encoder part. The issue remains exactly the same. To make sure we're talking about the same thing, I also used the image that you pulled from the web. I'm now running the following code: ```py import requests import numpy as np import onnxruntime as onnxrt from PIL import Image from transformers import TrOCRProcessor import config as c class OnnxModel(): def __init__(self, model_path): self.model = onnxrt.InferenceSession(model_path) def __call__(self, img): onnx_inputs = {self.model.get_inputs()[0].name: np.asarray(img, dtype='float32')} onnx_output = self.model.run(None, onnx_inputs)[0] return onnx_output if __name__ == "__main__": processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-str") encoder = OnnxModel(c.encoder_path) url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png' image = Image.open(requests.get(url, stream=True).raw).convert("RGB") pixel_values = processor(images=image, return_tensors="pt").pixel_values encoder_output = encoder(pixel_values) ``` Running this code, gives me exactly the same error. Namely: `onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_937' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:41 onnxruntime::ReshapeHelper::ReshapeHelper gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{2,100,100,64}, requested shape:{1,9216,128}` I also looked at the link of trocr-base-str that you provided. In there I see that they do `model.generate(pixel_values)`, but this is not possible with ONNX, since the 'InferenceSession' object that is used with onnx has no attribute 'generate'. <|||||>@Fritskee Thanks for trying the suggestion. I have tried your code and it works for me. I would mention the steps I use for exporting and running the inference. 1. Clone the transformers and install it from source. 2. 
Export the model ```python python -m transformers.onnx --model=microsoft/trocr-base-str --feature=vision2seq-lm models_trocr_base --atol 1e-3 ``` 3. Ran inference using your code. I updated the following line in your code because of version changes. ```python self.model = onnxrt.InferenceSession(model_path, providers=["CPUExecutionProvider"]) ``` Also please find the onnx, torch and onnxruntime version I am using. ```bash onnx==1.12.0 onnxruntime==1.12.1 torch==1.12.1 ``` For running inference using the 2 models, as of now, you'll have to roll your own generation loop with onnxruntime. An alternative would be to implement an ORTModelForVisionSeq2Seq in optimum, similar to how Whisper is being implemented: https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc <|||||>@mht-sharma I just created a new conda env, matched your package versions and installed hf/transformers from source. Did the job! I now get a `(1, 577, 768)` tensor at the output of the encoder. Thanks for your assistance! Will try to figure out the decoder part now!<|||||>@mht-sharma I am facing one final issue with the decoder, which is also a shape issue. I'm doing the following to infere with ONNX. I checked the code of [optimum](https://github.com/huggingface/optimum/pull/420/files#diff-77c4bfa5fbc9262eda15bbbc01d9796a0daa33e6725ca41e1cfe600a702d0bfc) as per your suggestion. From this I implemented my callable functionality of the `OnnxDecoder`. I also checked the keys of the `input_names` of the onnx names and all this is correct. However, at the end I keep on getting an issue with size mismatches. The way I currently understand it, this cannot be solved since the (1, 577, 1024) shape contains the prime number 577. This makes it impossible to find an integer dimension that can match the shape (x, -1, 16, 64). Additionally, the `input_ids` shape is limited to 514 in either dimension. Thus unless I use dimension 1, I cannot create a working dimension. But then using dimension 1 makes the runtime way too long. Any clue on this one? 
```py import requests import numpy as np import onnxruntime as onnxrt from PIL import Image from transformers import TrOCRProcessor import config as c class OnnxDecoder(): def __init__(self, model_path): self.model = onnxrt.InferenceSession(model_path, providers=["CPUExecutionProvider"]) self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.model.get_inputs())} def __call__(self, input_ids: torch.LongTensor, encoder_hidden_states: torch.FloatTensor, attention_mask: torch.LongTensor): onnx_inputs = {"input_ids": input_ids.cpu().detach().numpy()} if "attention_mask" in self.input_names: onnx_inputs["attention_mask"] = attention_mask.cpu().detach().numpy() if "encoder_hidden_states" in self.input_names: onnx_inputs["encoder_hidden_states"] = encoder_hidden_states.cpu().detach().numpy() onnx_output = self.model.run(None, onnx_inputs) return onnx_output if __name__ == "__main__": processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-str") encoder = OnnxModel(c.encoder_path) url = 'https://i.postimg.cc/ZKwLg2Gw/367-14.png' image = Image.open(requests.get(url, stream=True).raw).convert("RGB") pixel_values = processor(images=image, return_tensors="pt").pixel_values encoder_output = encoder(pixel_values) decoder_output = decoder(input_ids=torch.LongTensor(np.random.rand(512, 512)), encoder_hidden_states=torch.FloatTensor(encoder_output), attention_mask=torch.LongTensor(np.random.rand(512, 512))) ``` ERROR: `onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_623' Status Message: D:\a\_work\1\s\onnxruntime\core\providers\cpu\tensor\reshape_helper.h:36 onnxruntime::ReshapeHelper::ReshapeHelper size != 0 && (input_shape.Size() % size) == 0 was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,577,1024}, requested shape:{512,-1,16,64}` I am aware that using tensors with random values will result in a wrong output, but at this moment I'm just trying it to get to run and check the inference speed of the onnx model. <|||||>Hello @Fritskee the issue is because of the sample input. The `input_ids` and `attention_mask` expects input of size `<batch_size, sequence_length>`. In your above snippet you have created an input of batch size 512, however, the `encoder_hidden_states` are of batch size 1, hence the error. Try creating input with batch size 1 and it should work. Additionally, please use `torch.ones` to generate the `attention_mask` input, with same shape as input_ids.<|||||>> Hello @Fritskee the issue is because of the sample input. The `input_ids` and `attention_mask` expects input of size `<batch_size, sequence_length>`. > > In your above snippet you have created an input of batch size 512, however, the `encoder_hidden_states` are of batch size 1, hence the error. Try creating input with batch size 1 and it should work. > > Additionally, please use `torch.ones` to generate the `attention_mask` input, with same shape as input_ids. @mht-sharma This is indeed the error that I made. Thanks for pointing that out! I do notice that I am getting better inference times with the Huggingface pytorch model, than with the ONNX model. Which is something I've never encountered. Generally ONNX always outperforms PyTorch for inference. ONNX runs in 2.5 sec, PyTorch runs in 1.8 sec. Both on the same CPU.<|||||>Hello @Fritskee I am able to observe speedup on both cpu and gpu with the model. 
Could you please share your inference / benchmarking code if possible for testing? <|||||>> Hello @Fritskee I am able to observe speedup on both cpu and gpu with the model. Could you please share your inference / benchmarking code if possible for testing? Apologies for the late reply @mht-sharma, I use following code to run inference of the **ONNX model:** ```py if __name__ == "__main__": image = Image.open(r"C:\Users\local_img.png").convert("RGB") processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-str") encoder = OnnxEncoder(c.encoder_path) decoder = OnnxDecoder(c.decoder_path) start = time.time() pixel_values = processor(images=image, return_tensors="pt").pixel_values encoder_output = encoder(pixel_values) decoder_output = decoder(input_ids=torch.LongTensor(np.random.rand(1,384)), encoder_hidden_states=torch.FloatTensor(encoder_output), attention_mask=torch.LongTensor(np.ones((1,384)))) end = time.time() ``` This code is used to run inference with the **PyTorch/HuggingFace model:** ```py if __name__ == "__main__": image = Image.open(r"C:\Users\local_img.png").convert("RGB") processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-str') model = VisionEncoderDecoderModel.from_pretrained( r"C:\Users\Downloads\text-model\text-model\pytorch_model.bin", config=r"C:\Users\Downloads\text-model\text-model\config.json") start = time.time() pixel_values = processor(images=image, return_tensors="pt").pixel_values generated_ids = model.generate(pixel_values) model_output = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] end = time.time() ``` I got an avg. inference time of **3.339415764808655 seconds** over 100 runs for the **ONNX model**. Similarly, I got an avg. inference time of **2.542718324661255** over 100 runs for the **torch/HF model**. Both ran on the exact same machine, on CPU, while no other processes were running in the background. <|||||> Hi @Fritskee , for ORT inference you'll have to roll your own generation loop with ONNX Runtime to run the inference. The above code runs decoder with SL 384 with one forward pass which will give you incorrect results. 
You can wrap your ORTEncode and ORTDecoder in a ORTModelForVision2Seq ```python class ORTModelForVision2Seq(VisionEncoderDecoderModel): def __init__(self, *args, **kwargs): config = AutoConfig.from_pretrained(model_name) super().__init__(config) self._device = "cpu" self.encoder = ORTEncoder() self.decoder = ORTDecoder() def forward( self, pixel_values: Optional[torch.FloatTensor] = None, decoder_input_ids: Optional[torch.LongTensor] = None, encoder_outputs: Optional[Tuple[Tuple[torch.Tensor]]] = None, **kwargs, ) -> Seq2SeqLMOutput: # Encode if needed : first prediction pass if encoder_outputs is None: encoder_outputs = self.encoder(pixel_values=pixel_values) # Decode decoder_attention_mask = decoder_input_ids.new_ones(decoder_input_ids.shape) decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_outputs.last_hidden_state, ) return Seq2SeqLMOutput( logits=decoder_outputs.logits, ) def prepare_inputs_for_generation(self, input_ids, attention_mask=None, encoder_outputs=None, **kwargs): return { "decoder_input_ids": input_ids, "decoder_atttention_mask": input_ids, "encoder_outputs": encoder_outputs, } model = ORTModelForVision2Seq() start = time.time() model.config.decoder_start_token_id = 2 model.config.pad_token_id = processor.tokenizer.pad_token_id model.config.eos_token_id = processor.tokenizer.sep_token_id model.config.vocab_size = model.config.decoder.vocab_size generated_ids = model.generate(pixel_values.to(device)) end = time.time() ``` The class would be soon implemented in the `optimum` soon for easier inference. Stay tuned! <|||||>@mht-sharma Thanks for the example! I tried implementing it. For the further implementation I looked at the [optimum/pipelines.py](https://github.com/huggingface/optimum/blob/816268d7c3aba0de98f2d74db06344e76f071535/optimum/pipelines.py) and at [optimum/onnxruntime/modeling_seq2seq.py](https://github.com/huggingface/optimum/blob/816268d7c3aba0de98f2d74db06344e76f071535/optimum/onnxruntime/modeling_seq2seq.py). Basically I took the examples from `modeling_seq2seq.py` for the `ORTEncoder` and `ORTDecoder`, and I took your example from above and initialize the `ORTModelForVision2Seq(VisionEncoderDecoderModel)` like this: ```py class ORTModelForVision2Seq(VisionEncoderDecoderModel): def __init__(self, *args, **kwargs): config = AutoConfig.from_pretrained('microsoft/trocr-base-str') super().__init__(config) self._device = "cpu" self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=["CPUExecutionProvider"]), device='cpu') self.decoder = ORTDecoder(onnxruntime.InferenceSession(c.decoder_path, providers=["CPUExecutionProvider"]), device='cpu') ``` The encoder_path is the path to the file of `encoder.onnx` and the path to the decoder file is the path to `decoder.onnx`. For your example, the ORTEncoder is initialized like this: ```py class ORTEncoder: """ Encoder model for ONNX Runtime inference. Arguments: session (`onnxruntime.InferenceSession`): The ONNX Runtime inference session associated to the encoder. 
""" def __init__( self, session: onnxruntime.InferenceSession, device: torch.device, main_input_name: str = "input_ids" ): self.session = session self._device = device self.main_input_name = main_input_name self.input_names = {input_key.name: idx for idx, input_key in enumerate(self.session.get_inputs())} self.output_names = {output_key.name: idx for idx, output_key in enumerate(self.session.get_outputs())} ``` When I initialize the Onnx InferenceSessions as shown in the first code block of this message, I get the following error: `self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=["CPUExecutionProvider"]), device='cpu') File "C:\Users\FrCa\Miniconda3\envs\onnxfix\lib\site-packages\torch\nn\modules\module.py", line 1242, in __setattr__ raise TypeError("cannot assign '{}' as child module '{}' " TypeError: cannot assign '__main__.ORTEncoder' as child module 'encoder' (torch.nn.Module or None expected) python-BaseException` The `ORTEncoder` seems to expect a path to a Pytorch model for its session, which seems odd. I am currently passing the onnx converted encoder to `ORTEncoder`, but due to the error, I have also tried passing the equivalent `.pth` model Additionally, I also tried passing None (which doesn't make much sense, but it says it is a possibility). Both of them also give errors. **EDIT**: I did find that by not adding the superclass of `VisionEncoderDecoderModel`, the model can initialize both the ORTEncoder and ORTDecoder. However, this causes the code to break, because the model does need the `config` attribute to work with the example that is provided here. ```py class ORTModelForVision2Seq(): def __init__(self, *args, **kwargs): self._device = "cpu" self.encoder = ORTEncoder(onnxruntime.InferenceSession(c.encoder_path, providers=["CPUExecutionProvider"]), device='cpu') self.decoder = ORTDecoder(onnxruntime.InferenceSession(c.decoder_path, providers=["CPUExecutionProvider"]), device='cpu') ``` `model.config.decoder_start_token_id = 2 AttributeError: 'ORTModelForVision2Seq' object has no attribute 'config'` <|||||>Hi @Fritskee , apologies for the late reply. You need to inherit `ORTEncoder` and `ORTDecoder` from `torch.nn.Module` to avoid the issue.<|||||>Hi @umanniyaz, please open a new issue instead<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,810
closed
[Doctest] Add `configuration_nezha.py`
# What does this PR do? Add `configuration_nezha.py` to `utils/documentation_tests.txt` for doctest. Additionally, I updated its doctest format to make it consistent with BERT. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
10-22-2022 04:18:33
10-22-2022 04:18:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,809
closed
[Doctest] Add `configuration_plbart.py`
# What does this PR do? Add `configuration_plbart.py` to `utils/documentation_tests.txt` for doctest. Additionally, I updated its doctest format to make it consistent with BERT. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
10-22-2022 04:05:45
10-22-2022 04:05:45
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,808
closed
[Doctest] Add `configuration_poolformer.py`
# What does this PR do? Add `configuration_poolformer.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
10-22-2022 03:58:07
10-22-2022 03:58:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,807
closed
[Doctest] Add `configuration_electra.py`
# What does this PR do? Add `configuration_electra.py` to `utils/documentation_tests.txt` for doctest. Based on issue https://github.com/huggingface/transformers/issues/19487 @sgugger could you please take a look at it? Thanks =)
10-22-2022 03:50:42
10-22-2022 03:50:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,806
closed
[DOCTEST] `configuration_layoutlm.py`, `configuration_layoutlmv2.py`, `configuration_layoutlmv3.py`
Based on #19487
10-22-2022 03:06:37
10-22-2022 03:06:37
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19806). All of your documentation changes will be reflected on that endpoint.<|||||>cc @ydshieh
transformers
19,805
closed
[DOCTEST] Add `configuration_mbart.py`, `configuration_mctc.py`
Based on #19487
10-22-2022 02:42:04
10-22-2022 02:42:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @ydshieh
transformers
19,804
closed
Make Conv1D.bias optional
# What does this PR do? A simple change to make Conv1D.bias optional. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Conv1D is used in GPT models: @patrickvonplaten, @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-21-2022 23:00:04
10-21-2022 23:00:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger thanks for the comment. Then, if I'd like to disable the bias in GPT-2 models, should we directly change the GPT-2 model implementation to avoid the custom Conv1D, and deprecate it?<|||||>You can change the modeling code as you'd like to suit your need. We won't upstream it since it's not in the official GPT-2 code though.<|||||>So the code-change guidance is summarized as follows: 1. Official model code (e.g., GPT-2) cannot be changed to deprecate the use of Conv1D. 2. Conv1D in transformers is legacy and only used in some existing models (e.g., GPT-2). IIUC, then I still feel this PR is required, as Conv1D cannot be deprecated and is still a "building block" of some models. Is there any strong restriction (e.g., a code freeze) that prevents this legacy code from being changed? Thanks.<|||||>You can open a PR to remove the Conv1D use in GPT-2, but we won't add a new argument to control the bias like this. You can copy the modeling code and adapt it to your needs if this is something you want yourself.<|||||>I see. Yeah, that makes sense and is actually my intention. I don't plan to add an argument to the existing GPT-2. I just want to make sure I don't need to change other places in transformers to disable the bias in GPT-2. Thanks.
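For anyone adapting a local copy of the modeling code as suggested above, a minimal sketch of a Conv1D with an optional bias could look like this (it mirrors the `transformers` Conv1D, but the `bias` flag is the proposed addition, not an existing argument):

```python
import torch
from torch import nn

class Conv1DOptionalBias(nn.Module):
    # GPT-2 style "Conv1D": a linear layer with transposed weight layout.
    def __init__(self, nf, nx, bias=True):
        super().__init__()
        self.nf = nf
        self.weight = nn.Parameter(torch.empty(nx, nf))
        self.bias = nn.Parameter(torch.zeros(nf)) if bias else None
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x):
        size_out = x.size()[:-1] + (self.nf,)
        x = x.view(-1, x.size(-1))
        x = torch.addmm(self.bias, x, self.weight) if self.bias is not None else x @ self.weight
        return x.view(size_out)
```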
transformers
19,803
closed
Fix bug in Wav2Vec2's GPU tests
# What does this PR do? Fixes bugs introduced in #18351. The bugs appeared only when running tests on GPU, as reported and explained in https://github.com/huggingface/transformers/pull/18351#issuecomment-1285251004. In summary, some tests introduced in the previous PR: - failed when running in the same process - failed to call `.cpu()` before calling `.numpy()` ## Who can review? @ydshieh, @patrickvonplaten
10-21-2022 21:50:01
10-21-2022 21:50:01
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @falcaopetri Thank you for the fix. I run the 3 tests against this PR, but for `test_wav2vec2_with_lm_invalid_pool` (bot PT/TF), I get the following error: ```bash > self.assertIn("Falling back to sequential decoding.", cl.out) E AssertionError: 'Falling back to sequential decoding.' not found in '' ``` i.e. `cl.out` is empty string here. Could you double check here, please? `test_wav2vec2_with_lm_pool` is fixed by this PR though! <|||||>Hi @ydshieh. This one is trickier. As I was trying to fix the previous issue, I ended up changing the execution path by changing ```diff -processor.batch_decode(logits.numpy()) +processor.batch_decode(logits.cpu().numpy(), pool) ``` (see https://github.com/huggingface/transformers/pull/19803/commits/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b#diff-1063ef75ba73fe97fec48faf71f5020152ca85811784caaef74d4ca18fc6049fL1674-L1677). I was aiming to test this line: https://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L399-L402 But that's protected by a https://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L391 As I see, we could: 1. Don't test this execution path 2. Add back the `multiprocessing.set_start_method("spawn")` approach. - Pros: Previous bug happened because I was calling `set_start_method` twice, but we could re-implement it using a single call - Cons: This affects the whole runtime process, and since all GPU's tests are running in the same process, we could potentially break other tests (I didn't find any potential such tests in the code base, but we could have them in the future) --- One thing that caught my attention is that I'd expect the PR's CI/CD tests to fail. Looking the logs of `tests_torch*` I found: ``` tests_torch: ================ 130 passed, 218 skipped, 46 warnings in 32.78s ================ tests_torch_and_tf: ============================== 4 passed in 21.11s ============================== ``` Is `tests_torch` expected to skip all these tests? I also noticed that `tests_tf` uses `pytest -rA` flag, which makes it easier to debug which tests were skipped. `tests_torch` does not.<|||||>Hi @falcaopetri - what if we don't pass `pool` here https://github.com/huggingface/transformers/blob/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b/tests/models/wav2vec2/test_modeling_flax_wav2vec2.py#L612 would it work and test the target execution path we want? - Regarding `calling set_start_method twice`, do you mean one in PyTorch test and another one in TensorFlow test?<|||||>Regarding tests being skipped in PR CI, that is expected, as the relevant tests here are implemented in `Wav2Vec2ModelIntegrationTest` class, which is decorated with `@slow`, so those tests are run only after a PR being merged into `main`.<|||||>> * what if we don't pass `pool` here > would it work and test the target execution path we want? The target execution path is meant to alert non-unix users about the current limitations on these platforms when they use `batch_decode+don't specify a pool`. So yes, we shouldn't pass `pool` here. 
https://github.com/huggingface/transformers/blob/d4eb52d13d7af8be06ccb7723e1991a6f8ed8f59/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py#L391-L403 > * Regarding `calling set_start_method twice`, do you mean one in PyTorch test and another one in TensorFlow test? I'm not sure if I understood your question, but yes, `set_start_method` was being called on every test that required us to "simulate" we were on a non-unix platform (where `spawn` is default), i.e., it was used in `test_wav2vec2_with_lm_invalid_pool` from PT, TF and Flax tests. `set_start_method` should be called only once within a process though. ---- A little bit of context: - `pyctcdecode` does not currently work well with `spawn` contexts: if a `spawn` pool is passed, it will print a warning message and ignore the pool (see https://github.com/kensho-technologies/pyctcdecode/commit/a477d796e232b476ee8b877efba98aa2d822232e) - The change I've implemented is unrelated to this: `batch_decode` just needed to allow passing a `pool`, since default behavior is to create a new pool for every call (i.e., there was a huge overhead when processing multiple audios) - Since I was changing the code, I added a "safeguard" to the default behavior: "if no pool is specified, we create a pool only if we are on unix". The target execution path of the failing test is **"no pool specified and not on unix"**: instead of starting a spawn `Pool` that we know will be ignored by `pyctcdecode`, **we skip its creation and warn users.** So, technically, we could remove this "safeguard", instantiate a `Pool` independently whether it will or won't be used by `pyctcdecode`, and let it handle the user warning if necessary. <|||||>As you mentioned, the failing test mentioned in my comment is because the targe path is not executed (so we don't get the desired warning), and the reason is the `pool` is passed, my suggestion is to remove `pool` in https://github.com/huggingface/transformers/blob/ca9f3f66d3e9cf786e9db86a1a710bff7057ab2b/tests/models/wav2vec2/test_modeling_wav2vec2.py#L1678 and see if the test will pass (i.e. if the remaining part will do whatever they are expected to do). If this still fails (and the fix would involve more things), we can probably try to use a method recently intorduced https://github.com/huggingface/transformers/blob/371337a95b5d82cc9376c2595ed2022a5eb2ee6e/src/transformers/testing_utils.py#L1678 (which could avoid the problem of `calling set_start_method twice`). But we can work on this on our side :-), and I am fine to merge this PR as it is. Thank you for all the explanation :-) really appreciated, @falcaopetri ! <|||||>`transcription = processor.batch_decode(logits.cpu().numpy()).text` would fail in Unix envs (e.g., during CI/CD). This happens because `fork` is the default multiprocessing context, hence `set_start_method('spark')` is required. `run_test_in_subprocess` is really helpful in this case. I took the liberty to try it out myself (see new commit). I got TF, PT and Flax working correctly. Tested in both CPU and GPU, by running in Colab the following `RUN_SLOW=yes pytest -k test_wav2vec2_with_lm_invalid_pool tests/models/wav2vec2/test_modeling*_wav2vec2.py`. ---- As a side note, I had to force `"torchaudio<0.12"` on my local env and Colab, otherwise the following failed: ```python >>> from datasets import load_dataset >>> ds = load_dataset("common_voice", "es", split="test", streaming=True) >>> sample = next(iter(ds)) ... 
File "/opt/miniconda/base/lib/python3.8/site-packages/datasets/features/audio.py", line 273, in _decode_mp3 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") File "/opt/miniconda/base/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py", line 214, in load return _fallback_load_fileobj(filepath, frame_offset, num_frames, normalize, channels_first, format) File "/opt/miniconda/base/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py", line 33, in _fail_load_fileobj raise RuntimeError(f"Failed to load audio from {fileobj}") RuntimeError: Failed to load audio from <_io.BytesIO object at 0x130509bd0> ``` I installed transformers with `[torch-speech,flax,torch,tf]` extras.<|||||>Also cc @sanchit-gandhi here<|||||>For @sgugger to take a quick look before I can merge <|||||>Thanks for all the help @ydshieh! I'm really happy to help improving `transformers` (and to fix my failing tests 😅)! > Still think this should be in a decorator @sgugger you mean `run_test_in_subprocess` API right? Given its signature (`run_test_in_subprocess(test_case, target_func, inputs=None, timeout=600)`) I was ready to just use it as a decorator in my tests. Then I saw its usage in `Whisper`'s and realized the pickling concerns. IMHO it would have a much cleaner API if feasible though.<|||||>Thanks for the fix @falcaopetri! The new testing logic LGTM!<|||||>@falcaopetri Not sure how it's feasible exactly, but will definitely try something :-)<|||||>Sorry, I thought I merged this PR, but actually not. Thanks @sgugger
transformers
19,802
closed
Run Vit-MAE script key error
### System Info - `transformers` version: 4.23.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @NielsRogge @sg ### Information - [x] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I've put down the main parts of the run MAE script [https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-pretraining/run_mae.py] into a colab notebook: [https://colab.research.google.com/drive/1WtOTp-ocbBTgVXFiEXuY8MWhz_ex9OBy?usp=sharing] When I reach the trainer.train() step I get a key error from the data collator function - is the script perhaps outdated, or am I doing something wrong? ### Expected behavior Trainer starts training
10-21-2022 21:24:11
10-21-2022 21:24:11
You'll need to add `remove_unused_columns=False` to the `TrainingArguments`. The reason is the use of `set_transform` when preparing the datasets, which applies the transforms on the fly. Hence we still need the `image` column in the datasets to turn it into `pixel_values`.<|||||>Amazing thank you!
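Concretely, the fix from the reply above looks like this (`output_dir` and the batch size are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./vit-mae-demo",
    remove_unused_columns=False,  # keep the raw "image" column for set_transform
    per_device_train_batch_size=16,
)
```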
transformers
19,801
closed
Generate: minor docstring fix
# What does this PR do? Fixes this: <img width="816" alt="Screenshot 2022-10-21 at 22 12 40" src="https://user-images.githubusercontent.com/12240844/197289701-0092c00a-2aec-4c72-96ba-fb2df4ce14b1.png">
10-21-2022 21:13:29
10-21-2022 21:13:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>Fixed <img width="834" alt="Screenshot 2022-10-21 at 22 28 30" src="https://user-images.githubusercontent.com/12240844/197291473-5c5c8984-d553-4e7c-85a3-94a254e452f8.png">
transformers
19,800
closed
Added translation of run_scripts.mdx to Portuguese Issue #16824
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #16824 Currently, only the run_scripts.mdx file was translated as of this PR. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-21-2022 20:10:36
10-21-2022 20:10:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,799
closed
Refactor conversion function
# What does this PR do? This PR introduces a refactor of the conversion functions so that they can be re-used with safetensors. It comes with zero change of code inside, but the main takeaway is to build two functions that take a PyTorch-formatted (resp. TF-formatted) state dict and load it in a TF (resp. PyTorch) model. The state dict is just a dictionary mapping names to tensors, and the tensor can be a NumPy array (as it was) or a torch/tf tensor (which it will be with safetensors).
10-21-2022 19:11:20
10-21-2022 19:11:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,798
closed
Fix error/typo in docstring of TokenClassificationPipeline
Fixes #19797 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-21-2022 16:25:11
10-21-2022 16:25:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,797
closed
Small typo in documentation of TokenClassificationPipeline
### System Info https://github.com/huggingface/transformers/blob/v4.23.1/src/transformers/pipelines/token_classification.py#L176 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Open the documentation https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TokenClassificationPipeline.__call__ > word (str) — The token/word classified. This is obtained by decoding the selected tokens. If you want to have the exact string in the original sentence, use start and stop. ### Expected behavior I'm quite sure it should be "`start` and `end`"
10-21-2022 16:24:26
10-21-2022 16:24:26
transformers
19,796
closed
Add Image Processors
# What does this PR do? Adds image processors for most vision models in the transfomers library. CLIP for first review. Once it's had approval, I'll merge the other processors into this branch, ask for a final review and then merge if all good. Other models with more complex processing logic e.g. DETR will have subsequent PRs. **🚨🚨🚨 `size` parameter 🚨🚨🚨** The most important change here is how `size` is recorded in the configurations and passed around in the processing logic. * Some feature extractors' had `size` recorded as a tuple in `(width, height)` format, and others in `(height, width)` format. * To remove ambiguity, any new configurations will have `size` as a dictionary - `{"height": h, "width"}` - `{"shortest_edge": s}`: some feature extractors `size` indicates the length the shortest should be resized to - `{"shortest_edge": s, "longest_edge": l}`: same as above, but also places upper limit on the longest edge * `get_size_dict` is a helper function to keep backwards compatibility with old configs. It takes old size arguments and converts them to the equivalent dict. This is applied at `__init__`, `preprocess` and any relevant transforms e.g. `resize` where old size arguments can be passed. Other models: * [x] [BeiT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-beit) * [x] [ConvNeXT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-convnext) * [x] [DeiT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-deit) * [x] [DPT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-dpt) * [x] [Flava](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-flava) * [x] [ImageGPT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-imagegpt) * [x] [LayoutLM](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-layoutlm) * [x] [LeViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-levit) * [x] [MobileViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-mobilevit) * [x] [Perceiver](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-perceiver) * [x] [PoolFormer](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-poolformer) * [x] [SegFormer](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-segformer) * [x] [VideoMAE](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-videomae) * [x] [Vilt](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-vilt) * [x] [ViT](https://github.com/huggingface/transformers/compare/main...amyeroberts:transformers:add-image-processor-vit) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? 
Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests?
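As a quick illustration of the `size` convention described above (the exact keys vary by model, and the numbers below are illustrative only):

```python
# Resize to an exact height/width.
size_fixed = {"height": 224, "width": 224}

# Resize so the shortest edge reaches a given length.
size_shortest = {"shortest_edge": 256}

# Same, but also cap the longest edge.
size_bounded = {"shortest_edge": 800, "longest_edge": 1333}
```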
10-21-2022 15:48:18
10-21-2022 15:48:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger @LysandreJik @alaradirik @NielsRogge LMK if you'd rather I did this as many, separate feature extractor PRs for each model type, or as proposed in the description: CLIP as a demonstration, merging the other PRs into this one and then a final PR. <|||||>> @sgugger @LysandreJik @alaradirik @NielsRogge LMK if you'd rather I did this as many, separate feature extractor PRs for each model type, or as proposed in the description: CLIP as a demonstration, merging the other PRs into this one and then a final PR. @amyeroberts nice work :) I'm fine with reviewing all image processors in this PR. <|||||>@sgugger @alaradirik @NielsRogge @LysandreJik - All of the other models image processors are now merged in and all tests passing. I've added a few comments highlighting some design decisions. <|||||>@amyeroberts I get a 'cannot import name' error when I try to import XXXImageProcessor classes like this: `from transformers import XXXImageProcessor` Just wanted to double check if everything can be imported without any issues on your side?<|||||>> @amyeroberts I get a 'cannot import name' error when I try to import XXXImageProcessor classes like this: from transformers import XXXImageProcessor Just wanted to double check if everything can be imported without any issues on your side? @alaradirik It's not possible (yet) to directly import the image processors. This PR makes an alias for the feature extractors, such that if you do `from transformers import XXXFeatureExtractor` it will import the equivalent image processor. Enabling these imports will come in a set of follow up PRs which will handling completely replacing the feature extractors and making the image processors the official objects to use. This will include: making image processors directly importable; adding the `AutoImageProcessor` class; updating & expanding image processor documentation; replacing feature_extractor with `image_processor` in examples. <|||||>@alaradirik @NielsRogge Can you give a final review and let me know if we're good to merge?
transformers
19,795
closed
Strange shape of Scores vector
### System Info transformers==4.20.1 torch==1.11.0+cu113 Python 3.9.13 ### Who can help? @patrickvonplaten @Narsil @ola13 @gante ### Information - [X] The official example scripts ### Reproduction Hello everyone, I am using BART and I have enabled `return_scores = True` with `beam_size = 4`. The shape of the Scores vector is `(seq_len, batch_size * beam_size * num_returned sequence, vocab_size)`. I would like to know how I can subdivide the vector obtaining the shape `(seq_len, batch_size, beam_size, num_returned_sequence, vocab_size)`. Because at the moment it is not possible to map the input sentences in a batch with the output. Kind regards, Andrea ps: I also opened a [blog post](https://discuss.huggingface.co/t/strange-shape-of-scores-vector/24780), I did not know which was the most appropriate place, sorry for the duplicate. ```python3 outputs = model.generate( input_ids=source_ids, max_length = 500, return_dict_in_generate=True, output_scores=True, num_beams=4 ) print(len(outputs.scores)) # seq_len print(outputs.scores[0].shape) # (batch_size * beam_size * num_returned sequence, vocab_size) ``` ### Expected behavior a score vector with a shape of (seq_len, batch_size, beam_size, num_returned_sequence, vocab_size)
10-21-2022 15:43:05
10-21-2022 15:43:05
Hi @andreabac3 👋 I apologize in advance. The documentation for that part of the codebase is poor at the moment, so it's completely understandable that you feel confused. The first documentation issue is the shape of `outputs.scores[0].shape`, which is actually `(batch_size * beam_size, vocab_size)`. It contains the scores (logits) of each token for each beam at each step. However, on their own, these scores are not very helpful. The most common use case is to use this tensor to obtain the scores of the selected tokens for each output -- we have an undocumented function for that (second documentation issue, tracked in https://github.com/huggingface/transformers/issues/18616), [`compute_transition_beam_scores`](https://github.com/huggingface/transformers/blob/e0b825a8d03f50ed9dbf9fbbbb3b4fcf0b4e4b22/src/transformers/generation_utils.py#L876). You can call it as ```python model.compute_transition_beam_scores(outputs.sequences, outputs.scores, outputs.beam_indices) ``` and it returns a tensor of shape `(num_return_sequences, seq_len)`. For instance, the value at `[2, 6]` corresponds to the score of the 7th token of the 3rd returned sequence. If you sum these scores across the `seq_len` axis and divide by the length of the sequence (and apply the appropriate `length_penalty` scaling, if needed), you will obtain back `outputs.sequences_scores`. I hope this helps! We will be revisiting the documentation soon to make all this clear. Let me know if you have further questions 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @gante 👋, sorry for the late answer. Don't worry about the missing documentation; the function worked perfectly. Thanks again for your support, Andrea
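Continuing from the `generate` call shown in the issue above, here is a rough sketch of that reconstruction (assuming `length_penalty=1.0` and that every returned sequence uses all generation steps; the exact bookkeeping around EOS/padding may differ):

```python
# `outputs` comes from model.generate(..., num_beams=4, return_dict_in_generate=True, output_scores=True)
transition_scores = model.compute_transition_beam_scores(
    outputs.sequences, outputs.scores, outputs.beam_indices
)

# Sum the per-step scores of the selected tokens and normalize by the number of steps.
approx_sequence_scores = transition_scores.sum(dim=-1) / transition_scores.shape[-1]
print(approx_sequence_scores)   # should be close to outputs.sequences_scores
```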
transformers
19,794
closed
Use None to detect if truncation was unset
# What does this PR do? This PR changes the default of the `truncation` argument to `None` so that we can detect the difference between: - truncation was not set - truncation was set to `False`. As pointed out in #19790, relying on `False` to test if the argument is unset yields unexpected behaviors. Fixes #19790
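To make the intent concrete, here is a minimal sketch of the detection pattern this enables (illustrative only, not the actual tokenizer code):

```python
def resolve_truncation(truncation=None, max_length=None):
    # None means the caller never set the argument, so the backward-compatible
    # behavior (truncate when max_length is given) can kick in.
    if truncation is None:
        return "longest_first" if max_length is not None else "do_not_truncate"
    # An explicit False is now always respected.
    if truncation is False or truncation == "do_not_truncate":
        return "do_not_truncate"
    return truncation
```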
10-21-2022 14:46:01
10-21-2022 14:46:01
transformers
19,793
closed
Update doc for revision and token
# What does this PR do? This PR updates the docstrings of all `from_pretrained` methods to: - adapt the documentation for `use_auth_token` - remove the Tip about `use_auth_token=True` for private models - add a tip about checking out PRs using the revision argument (For now just did the changes on the config but will copy paste everywhere once I have had opinions :-) )
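For reference, a hedged example of the kind of tip being added for the `revision` argument (the repository name and PR number below are placeholders):

```python
from transformers import AutoModel

# Hub pull requests live on refs of the form `refs/pr/<number>`, so a PR can be
# tried out by passing that ref as the revision.
model = AutoModel.from_pretrained("bert-base-uncased", revision="refs/pr/1")
```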
10-21-2022 14:25:37
10-21-2022 14:25:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,792
closed
Fix nightly test setup
# What does this PR do? The nightly tests were not properly triggered because I made a typo in the variable name to set 🤦‍♂️ . This has been fixed since yesterday, but now there is a failure in the setup (see [here](https://app.circleci.com/jobs/github/huggingface/transformers/597479?utm_campaign=workflow-failed&utm_medium=email&utm_source=notification)) because I forgot to check out the repo.
10-21-2022 14:09:19
10-21-2022 14:09:19
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,791
closed
Remove undefined pytorch_model
# What does this PR do? The docs recommend running `del pytorch_model` to free memory, but `pytorch_model` has never been defined. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-21-2022 13:40:30
10-21-2022 13:40:30
transformers
19,790
closed
`truncation='do_not_truncate'` is not equivalent to `truncation=False`
### System Info - `transformers` version: 4.21.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @SaulLu @patrickvonplaten ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base") sent = 'The quick brown fox jumps over the lazy dog' len(tokenizer.encode(sent, max_length=5, truncation='do_not_truncate')) ``` prints: `11` BUT: ```python len(tokenizer.encode(sent, max_length=5, truncation=False)) ``` prints: `5` ### Expected behavior Hi, I would expect that `truncation='do_not_truncate'` would be always equivalent to `truncation=False`. This manual: https://huggingface.co/docs/transformers/pad_truncation and this doc https://huggingface.co/docs/transformers/main_classes/tokenizer say that: >`False` or `'do_not_truncate'`: no truncation is applied. This is the default behavior. Which means that they are supposed to be equivalent (regardless of what they do, they should behave the same). However, when using `truncation=False` and providing any value for `max_length`, it defaults to `'longest_first'` truncation strategy. Whether this default behavior is natural or not, isn't `False` supposed to be identical to `'do_not_truncate'`? This leads to a situation when the user explicitly specifies `truncation=False` but the text **is tokenized**. I suggest that `truncation=False` should always mean "no truncation", no matter what, regardless of `max_length` was supplied or not. I think that this is the expected behavior by any user: explicitly specifying `truncation=False` should mean no truncation, regardless of other parameters. Thanks, Uri
10-21-2022 12:43:36
10-21-2022 12:43:36
It looks like the PR that set the current truncation/padding arguments has some behavior for backward compatibility that is triggered when `truncation` is unset. However, instead of having `truncation=None` as default (to make sure to detect when it's unset), it uses `truncation=False` as default. So in this instance, even if you passed along `truncation=False`, it activates those tests for backward compatibility. It is way safer to use `truncation="do_not_truncate"` to avoid this. I'll investigate if we can fix this without breaking anything (by changing to `truncation=None` as the default for unset behavior). <|||||>Thanks! Yeah I know that it's safer to use `truncation="do_not_truncate"`, but passing booleans is generally safer than strings (and it's shorter), so (since the docs allow it) I passed `truncation=False` and I found it very unpredictable that truncation was still applied, even when I explicitly passed `False`.<|||||>Awesome, thanks!
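A small illustration of the behavior discussed above, reusing the numbers reported in the issue (observed on v4.21.x; after the change in this PR, an explicit `truncation=False` is respected):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
sent = "The quick brown fox jumps over the lazy dog"

# The string form never truncates, even when max_length is set.
print(len(tokenizer.encode(sent, max_length=5, truncation="do_not_truncate")))  # 11

# On affected versions, the boolean form was overridden by the backward-compat path.
print(len(tokenizer.encode(sent, max_length=5, truncation=False)))              # 5
```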
transformers
19,789
closed
add greek translation to index
# What does this PR do? Fixes #19788 ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
10-21-2022 11:48:26
10-21-2022 11:48:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19789). All of your documentation changes will be reflected on that endpoint.<|||||>Yes, Greek under ISO 639-1 would be `el`, which is what we use. `gre` corresponds to 639-2. (source: https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)<|||||>Hello! I am really sorry for the mistake I will fix it shortly!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry missed the change! It looks like there is a problem with CircleCI (tests are not run). Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Could you also add the Greek language code [here](https://github.com/huggingface/transformers/blob/11f3ec7224c83c9e5c379a774b9d3984e68d26fa/.github/workflows/build_documentation.yml#L18) and [there](https://github.com/huggingface/transformers/blob/11f3ec7224c83c9e5c379a774b9d3984e68d26fa/.github/workflows/build_pr_documentation.yml#L17) so that the doc is built in Greek? let us know if you need any help, thanks!
transformers
19,788
open
Add translation of docs to greek
I thought it would be awesome to add a Greek translation of the documentation. So far I have only translated the index.mdx file.
10-21-2022 11:43:47
10-21-2022 11:43:47
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
19,787
closed
Generate: contrastive search test updates
# What does this PR do? The newly introduced tests had a bunch of minor issues, including models too big for CI, formatting problems, or slightly incorrect strings (the hardcoded strings were generated for inputs that were changed before the final commit). This PR addresses these issues. All new (slow) tests passing locally now.
10-21-2022 11:41:31
10-21-2022 11:41:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,786
closed
Fix CTRL `test_torchscript_xxx` CI by updating `_create_and_check_torchscript`
# What does this PR do? Fix CTRL `test_torchscript_xxx` CI by updating `_create_and_check_torchscript`. Before calling `torch.jit.trace`, we now run the prepared inputs through the model first. ### More context The PR #19681 puts the `pos_encoding` attribute on the correct device for the CTRL model, but this can only be done safely in the `forward` method. However, our current `torchscript` tests don't run the model with the prepared inputs before calling `torch.jit.trace`, so we get the following error for `CTRL` after PR #19678: ```bash (line 535) torch.jit._trace.TracingCheckError: Tracing failed sanity checks! ... ... Comparison exception: The values for attribute 'device' do not match: cpu != cuda:0. ``` See [CI report](https://github.com/huggingface/transformers/actions/runs/3286486761/jobs/5414682626)
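A rough sketch of the idea, using a deliberately tiny, illustrative config rather than the actual test fixtures:

```python
import torch
from transformers import CTRLConfig, CTRLModel

# Run the inputs through the model once before tracing, so buffers that are
# created or moved inside forward() (e.g. pos_encoding) already live on the
# right device when torch.jit.trace records the graph.
config = CTRLConfig(vocab_size=100, n_embd=32, n_head=2, n_layer=2, dff=64, torchscript=True)
model = CTRLModel(config).eval()
input_ids = torch.tensor([[1, 2, 3, 4]])

with torch.no_grad():
    model(input_ids)                               # warm-up forward pass
    traced = torch.jit.trace(model, (input_ids,))
```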
10-21-2022 11:20:02
10-21-2022 11:20:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Running all `test_torchscript_xxx` tests (per model separately, as in our CI), they all pass
transformers
19,785
closed
Update `ImageToTextPipelineTests.test_small_model_tf`
# What does this PR do? After PR #19732, I uploaded the correctly converted TF model to the Hub repo [hf-internal-testing/tiny-random-vit-gpt2](https://huggingface.co/hf-internal-testing/tiny-random-vit-gpt2/tree/main) This PR updates the expected values accordingly, which is the same values as for `test_small_model_pt`.
10-21-2022 11:08:02
10-21-2022 11:08:02
Thank you so much for this @ydshieh !!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks!
transformers
19,784
closed
Add Swin2SR
# What does this PR do? Fixes #19568 and replaces #19667 This PR adds Swin2SR, a Swinv2-based model for image super resolution, compression and restoration. To do: - [x] finish `Swin2SRImageProcessor` - should incorporate padding - [x] fix integration test - [x] transfer checkpoints to the appropriate organization
10-21-2022 08:47:48
10-21-2022 08:47:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @sgugger here
transformers
19,783
closed
Add XCiT Model
### Model description Cross-covariance image transformer (XCiT) is built upon XCA. It combines the accuracy of conventional transformers with the scalability of convolutional architectures. [Paper](https://arxiv.org/abs/2106.09681) ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Official Implementation: https://github.com/facebookresearch/xcit Timm Implementation: https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/xcit.py
10-21-2022 08:45:42
10-21-2022 08:45:42
transformers
19,782
closed
Add Flan-T5 Checkpoints
### Model description Flan-T5 models are instruction-finetuned from the T5 v1.1 LM-adapted checkpoints. They can be directly used for few-shot prompting as well as standard fine-tuning. Here is the [paper](https://arxiv.org/abs/2210.11416). ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation * Paper: https://arxiv.org/abs/2210.11416 * Model implementation: same as T5 v1.1 * Model weights: https://github.com/google-research/t5x/blob/main/docs/models.md#flan-t5-checkpoints
10-21-2022 02:33:38
10-21-2022 02:33:38
Cool! cc @patrickvonplaten <|||||>FYI the conversion script can be found here: https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/convert_t5x_checkpoint_to_flax.py<|||||>cc @younesbelkada @ArthurZucker one of you interested in taking it? :-) <|||||>Yes! On it 🤗 <|||||>They are available now: https://huggingface.co/models?search=flan-t5<|||||>@NielsRogge: Quick question: when I try to load any of the FLAN models: `model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")` I receive the following error: `AttributeError: 'T5LayerFF' object has no attribute 'config'` I have transformers==4.18.0.<|||||>Hey @BalazsFeherUK feel free to open a new issue with the full trace of the error and ping me 🤗
transformers
19,781
closed
`"transformers_version"` is not enforced
### System Info - `transformers` version: 4.21.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - Huggingface_hub version: 0.8.1 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes+no - Using distributed or parallel set-up in script?: no ### Who can help? This is a general problem with loading pretrained models in the library, not any specific model: @sgugger @stevhliu @patrickvonplaten ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("NinedayWang/PolyCoder-2.7B") model = AutoModelForCausalLM.from_pretrained("NinedayWang/PolyCoder-2.7B") ``` ### Expected behavior The model's `config.json` specifies that this model requires version `4.23.1` here: https://huggingface.co/NinedayWang/PolyCoder-2.7B/blob/main/config.json#L21 but if the user is trying to load this model with an older version (`4.21.1`), it still incorrectly and silently does load. So, I would expect an error instead of a successful loading, because with an old version of `transformers`, this model produces incorrect predictions. I prefer not being able to load the checkpoint if I don't have the required version of the library. In other words, why do we have the `"transformers_version"` field in the `config.json`, if it is not enforced? Thanks!
10-21-2022 01:56:46
10-21-2022 01:56:46
It is just informative (and saved automatically, not selected by the user), so it is in no way a minimal version required for the model.<|||||>Thanks! Don't we want a way to enforce that a certain checkpoint will be loaded only by a specific version? Currently users may download checkpoints, load them successfully, but just get bad predictions.<|||||>Do you have an explicit example of that? We maintain very strict backward/forward compatibility in the model code.<|||||>Yes, this model `NinedayWang/PolyCoder-2.7B` must be loaded with version `>=4.23.0`, because it depends on this PR: https://github.com/huggingface/transformers/pull/18695 <|||||>Ah, yet another argument for not adding config arguments like those. Thanks for pointing that out.<|||||>Thanks a lot!
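For users who want to guard against this themselves, a sketch of a manual check (this is not something the library does automatically):

```python
from packaging import version

import transformers
from transformers import AutoConfig

config = AutoConfig.from_pretrained("NinedayWang/PolyCoder-2.7B")
recorded = getattr(config, "transformers_version", None)

# The recorded value is only informative, but it can still be used as a hint.
if recorded is not None and version.parse(transformers.__version__) < version.parse(recorded):
    print(
        f"Checkpoint was saved with transformers {recorded}, "
        f"but {transformers.__version__} is installed - predictions may differ."
    )
```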
transformers
19,780
closed
Issue when exporting SwinForImageClassification to ONNX format
### System Info ### 1. Libraries version transformers == 4.23.1 / torch == 1.12.1 / onnx = 1.12.0 ### 2. Context I trained an Image Classifier using `SwinForImageClassification` with a custom number of labels. I want to put it in production using the ONNX format. So I need to export my model in this format. ### Error I used the `python -m transformers.onnx [...]` as recommended in [your documentation](https://huggingface.co/docs/transformers/serialization#exporting-a-model-to-onnx). Without considering some warning (available hereabove) during the ONNX creation, the model's creation works. However the test that compares values (`validate_model_outputs`) at the end fails: ``` $ python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor [...] Validating ONNX model... -[✓] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[✓] (3, 160) matches (3, 160) -[x] values not close enough (atol: 0.0001) [...] ``` I tried to use `torch.onnx.export` and `transformers.onnx.export`. The same behaviour happened. **BUT when I tried with the model ViT (`ViTForImageClassification`), everything worked well !!** ### 3. How to reproduce the issue _In the dedicated section._ I don't know if I miss something obvious here ? Thank you for your work 🤗 and I hope we can solve this ! 🚀 ### 4. Warning during ONNX creation here is the different warning I have when creating the ONNX model: (Each happen multiple times for different operation) ``` - TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! - UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). - WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. ``` ### Who can help? @NielsRogge, @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction Here is the code to reproduce the issue: **step 1: Save pretrained** ``` from transformers import SwinForImageClassification, AutoFeatureExtractor pretrained_path = "microsoft/swin-tiny-patch4-window7-224" swin_classif = SwinForImageClassification.from_pretrained(pretrained_path, ignore_mismatched_sizes=True, num_labels=160) feat_extract = AutoFeatureExtractor.from_pretrained(pretrained_path) swin_classif.save_pretrained('test_onnx/swin_classif/sources') feat_extract.save_pretrained('test_onnx/swin_classif/sources') ``` **step 2: Build ONNX** `python -m transformers.onnx --model='test_onnx/swin_classif/sources' test_onnx/swin_classif/ --feature='image-classification' --preprocessor=feature_extractor` ### Expected behavior I would expect the ONNX creation for `SwinForImageClassification` to work: given the same input, SwinForImageClassification model produces the same output as its exported version in ONNX format.
10-21-2022 01:34:31
10-21-2022 01:34:31
You can try a slightly less strict absolute difference, as explained here: https://github.com/huggingface/transformers/issues/15716#issuecomment-1044630255<|||||>Hello @NielsRogge, thank your for this quick response ! I was not clear, the test is not failing for a bad reason: the outputs are actually significantly different: here is the full error message: ``` Validating ONNX model... -[✓] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[✓] (3, 160) matches (3, 160) -[x] values not close enough (atol: 0.0001) Traceback (most recent call last): File "/Users/bleguay/.pyenv/versions/3.10.0/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/bleguay/.pyenv/versions/3.10.0/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 180, in <module> main() File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 173, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/convert.py", line 455, in validate_model_outputs raise ValueError( ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 1.3692436218261719 for [-0.06106845 0.12285465 -0.18921788 0.5049372 -0.1986409 0.34636068 -0.1691071 -0.02217978 -0.6896293 -0.21974993 -0.76842034 0.00995809 -0.11905045 -0.09523626 -0.17755091 -0.04897689 0.39708626 0.05108643 0.06990042 -0.04949172 -0.15309443 -0.8103958 0.3775105 -0.45507845 -0.05611497 0.05218322 -0.71096545 -0.38933533 0.06547274 -0.3132524 0.26966637 0.02131431 -0.07573311 0.19405767 -0.16967274 -0.5358653 0.83661413 -0.23051989 -0.33136314 0.28437945 0.23672453 0.1740103 -0.30914858 -0.33530912 0.20119892 0.402417 -0.01951164 -0.5698619 -0.35475633 -0.1181808 -0.4235557 -0.34506178 -0.12214591 -0.49559855 -0.50011915 0.71717185 -0.23441991 0.3804775 -0.15777466 0.19934995 -0.2813175 -0.16817626 0.1785079 -0.48286003 -0.1709578 0.2557745 -0.01732143 -0.24144898 -0.15041552 -0.49435216 -0.10977422 -0.23015508 0.40337044 0.13203782 0.45959228 -0.45770365 0.29646975 -0.02936734 0.06572083 -0.14521742 -0.196341 0.4045911 0.10471516 0.23922467 0.50704205 0.08659008 -0.02236488 0.3054718 0.30713248 -0.09301025 -0.0402 0.11342324 -0.28109962 0.27469164 -0.10700954 -0.06830485 0.4378688 0.15085813 -0.45130914 0.16799057 0.33943364 0.6166247 -0.37087205 -0.32451615 0.13080192 -0.2465924 0.21146286 0.46603474 -0.3899023 0.5917809 0.18371034 -0.4288618 -0.5969511 -0.11656356 0.5030414 0.29792047 -0.33490264 -0.23519978 0.4783889 -0.17767602 -0.19774103 -0.14759333 0.20283869 -0.33203778 -0.15488361 0.28006512 0.01535628 0.6650907 0.31069627 0.1286084 0.49600935 0.34268755 -0.04938671 -0.57014364 0.72736365 -0.10143891 -0.6347118 0.07822949 -0.30062795 -0.54248583 -0.4074985 -0.3044052 0.18507619 -0.36234486 0.46773177 -0.11254795 0.27562687 -0.10287824 0.1802718 0.01491091 -0.41343385 0.17572625 -0.20250738 -0.43644306 0.41057307 0.1371599 0.6772319 -0.49024868 -0.02152741 -0.3506258 -0.18412054 0.12870055 -0.19661918 0.46798164 0.04501206 0.22052541 -0.17762536 0.06754559 -0.6358628 -0.29670113 -0.73341167 0.09625536 -0.10944731 -0.14755067 -0.14341569 0.11812273 0.29438084 0.02348799 
0.034365 -0.03890094 -0.32477033 -0.640104 0.30603072 -0.43583125 -0.01389889 0.05416113 -0.679801 -0.45222676 -0.0023165 -0.41687882 0.22143522 -0.2897661 -0.05619043 0.06813143 -0.09530576 -0.5496853 0.7789349 -0.14078154 -0.35014948 0.2892362 0.18976417 0.05590723 -0.3260829 -0.40019906 0.21867548 0.3817903 -0.00801944 -0.54762053 -0.3570432 -0.12390244 -0.5746232 -0.32545304 -0.08603653 -0.58821917 -0.5115222 0.7171049 -0.2543296 0.43015337 -0.06181256 0.11959662 -0.12665194 -0.17386276 0.18951124 -0.29464567 -0.32482645 0.16736132 -0.13734964 -0.13887791 -0.21936226 -0.5363412 0.00894417 -0.4345506 0.3795787 0.15755552 0.40534687 -0.4798122 0.48428625 -0.11364223 0.12477627 -0.06477118 0.01065597 0.27613276 0.20831317 0.26337263 0.56675375 0.20128776 -0.07043616 0.34169155 0.2943419 -0.11713834 -0.07966733 0.0499542 -0.348781 0.21979605 -0.28619874 -0.09100726 0.37456948 0.13946635 -0.33606255 0.15814117 0.34456918 0.66328716 -0.34123376 -0.3316517 -0.07244727 -0.17735815 0.19246355 0.6528053 -0.42591572 0.54325604 0.21217923 -0.44543248 -0.5038978 -0.1207954 0.5446789 0.12648779 -0.3198638 -0.30068278 0.5454049 -0.2828171 -0.19203103 -0.26519054 0.22469468 -0.42633253 -0.12229709 0.24556151 0.01124579 0.64908105 0.3516025 0.28168085 0.40130708 0.14279017 -0.11640775 -0.41997045 0.61343706 -0.01251362 -0.6319312 0.03432515 -0.32933724 -0.5734776 -0.41919896 -0.19220461 0.19410697 -0.36557207 0.28580746 -0.15460443 0.36335593 -0.08719397 0.26283717 0.03314497 -0.3651286 0.10970266 -0.25979558 -0.3208456 0.3499791 0.15752569 0.6617712 -0.4127711 0.02931512 -0.35706407 -0.2145712 0.02474257 -0.17494626 0.60114133 -0.06213076 0.1068871 -0.04108006 0.07552788 -0.6358697 -0.40869412 -0.68531764 0.0760276 -0.12475098 -0.1669592 -0.16771694 0.14568909 0.3728184 0.2728331 -0.11506456 0.02971141 -0.10832652 -0.78700745 0.11165941 -0.42647707 0.13101816 0.01028017 -0.68079805 -0.5744518 0.0489867 -0.21155152 0.2604037 -0.09611046 -0.02375504 0.22795928 -0.1619378 -0.5631519 0.8423456 -0.00467771 -0.13729835 0.23964095 0.14606774 0.12160218 -0.06164521 -0.5445323 0.33276808 0.49160522 0.01048732 -0.6667893 -0.28605506 -0.31852537 -0.40799573 -0.4934004 -0.25376865 -0.39825052 -0.6462982 0.8023047 -0.13846312 0.6053733 -0.00777063 0.2746262 -0.21927634 -0.14655772 0.3084348 -0.35635483 -0.22394183 0.13854408 0.01103184 -0.17685157 -0.16520068 -0.5759028 0.21486142 -0.38122138 0.21948887 0.1675158 0.37934083 -0.42796057 0.21204138 -0.05589592 0.13516527 -0.00360115 -0.00644675 0.26455837 0.1864259 0.019116 0.74247843 0.04402375 -0.12179609 0.15354294 0.3543662 -0.11259714 0.12143008 -0.07637051 -0.34425476 0.38279337 -0.11939427 -0.14530821 0.5397643 0.21652104 -0.20089209 0.39749998 0.45645627 0.47782743 -0.3020329 -0.15384908 0.21358004 -0.18617995 0.31016254 0.54617953 -0.5179592 0.45387554 0.3203557 -0.32384175 -0.46208858 -0.2512101 0.5824921 0.13412705 -0.215639 -0.4263451 0.4956808 -0.29090098 -0.15356363 -0.06714268 0.3812586 -0.49488905 -0.17332229 0.55745924 -0.15221891 0.83650666 0.36271864 -0.00793004 0.4636365 0.46242952 -0.27560318 -0.3318567 0.6762064 -0.12277468 -0.5726782 0.3396322 -0.56470525 -0.5560886 -0.3274095 -0.2678147 0.06875373 -0.25510776 0.24946427 -0.04763301 0.27787483 -0.14732757 0.27278543 -0.17570269 -0.3139099 0.10965091 -0.28918004 -0.16069773 0.3428337 0.13698481 0.8861261 -0.23068875 0.0391845 -0.18411607] vs [ 1.28135830e-01 1.70594186e-01 6.14838600e-01 5.49140811e-01 4.69820261e-01 1.11730479e-01 -3.14225554e-01 7.81431854e-01 -7.78754950e-01 -6.38007522e-02 
2.28893846e-01 -6.26583874e-01 -2.50711739e-01 -4.85903800e-01 -3.10093999e-01 6.24564886e-01 -5.28742313e-01 5.05275965e-01 -4.65409130e-01 6.56028330e-01 -2.03518152e-01 -1.51091576e-01 3.00208390e-01 -2.97761351e-01 -1.71665862e-01 -5.28911278e-02 2.03803942e-01 1.40974492e-01 -1.21421896e-01 8.24191928e-01 3.42895120e-01 7.86975741e-01 -5.23516983e-02 -3.52419317e-01 3.02186906e-01 2.88510144e-01 3.51488084e-01 7.02708364e-02 -1.15866810e-01 -2.68891037e-01 8.08803618e-01 -5.92842326e-02 6.99101463e-02 8.97745937e-02 1.39535978e-01 -3.86431873e-01 6.02225512e-02 4.87125576e-01 6.85935169e-02 -4.39252973e-01 -1.72007129e-01 5.41739464e-01 1.04485273e-01 2.95256585e-01 -2.49259204e-01 8.35965753e-01 -3.85174900e-01 1.60440445e-01 5.23748919e-02 -2.11794093e-01 -3.98029804e-01 -3.30731153e-01 -4.34303999e-01 -2.86957264e-01 2.27841839e-01 -4.76897717e-01 9.08873975e-02 -2.30960101e-02 -1.24389812e-01 -3.04996610e-01 -7.26581812e-02 8.28876942e-02 -3.23720336e-01 1.00407183e-01 7.57869110e-02 -4.93685246e-01 1.62461221e-01 3.72860581e-02 1.72318250e-01 4.84319180e-01 3.54439080e-01 2.43036404e-01 4.60274130e-01 -1.69880912e-01 -9.97056365e-02 7.78566450e-02 -4.69432414e-01 1.40246347e-01 1.33542120e-01 -7.03689381e-02 -1.00386068e-01 7.72113204e-02 -1.47819608e-01 -5.82554340e-02 -2.82135569e-02 2.54881173e-01 -1.36251241e-01 8.71378422e-01 -4.29740548e-01 -1.30290717e-01 -1.91676959e-01 -7.64316134e-03 1.98935851e-01 -5.97431183e-01 -2.06617534e-01 -3.22597101e-02 -2.79274642e-01 7.76519418e-01 -9.51622844e-01 -2.75189161e-01 6.74007177e-01 9.40381885e-01 -9.32118535e-01 -3.05536866e-01 7.94740081e-01 2.92763114e-03 3.38176519e-01 3.78656387e-01 3.12738180e-01 9.91565406e-01 1.87288716e-01 2.43043900e-02 4.11912024e-01 5.47892213e-01 2.22662203e-02 -6.54011011e-01 5.97522743e-02 -1.50490522e-01 -5.57740033e-03 -1.17180705e+00 -5.18481553e-01 2.62342155e-01 6.85439408e-02 4.19662029e-01 -3.49796116e-02 -2.48095900e-01 8.43257308e-02 6.76371306e-02 -1.49092495e-01 3.83319199e-01 2.13684678e-01 -6.26890957e-01 -2.54286230e-01 -3.43747616e-01 7.39749193e-01 -9.40448791e-02 3.17870259e-01 2.62159199e-01 3.28509994e-02 -2.41731316e-01 -1.90109000e-01 2.16163784e-01 -9.76384804e-02 -1.72702849e-01 7.67504573e-01 -9.20159459e-01 6.19455040e-01 -5.25325775e-01 -6.09292686e-02 -8.52263391e-01 3.49729359e-02 2.97759563e-01 4.74937022e-01 4.12648469e-01 3.74721527e-01 1.21058494e-01 -2.41388261e-01 5.22115350e-01 -6.32577777e-01 -2.46033400e-01 1.91389874e-01 -5.35460591e-01 -1.25709504e-01 -2.99965620e-01 -3.21753263e-01 6.38255298e-01 -4.51283753e-01 3.47820938e-01 -3.67054909e-01 7.22098470e-01 -7.49995261e-02 -2.69949436e-04 1.94670975e-01 -3.32899958e-01 -1.89728126e-01 -8.35971460e-02 1.95707992e-01 3.98901105e-02 -1.33678481e-01 7.17000544e-01 2.51645386e-01 6.28485441e-01 -4.08213362e-02 -2.41552234e-01 2.71573693e-01 1.91486299e-01 2.80226946e-01 1.29383892e-01 -7.03004003e-03 -2.07323164e-01 7.42809296e-01 6.88267723e-02 1.41356453e-01 8.76589119e-03 -7.38414377e-02 -2.32301414e-01 -3.41140553e-02 3.97274435e-01 1.06192134e-01 -3.51829886e-01 -1.51566312e-01 3.19048643e-01 1.42810777e-01 3.39679480e-01 -1.44272953e-01 7.85561681e-01 -3.36977839e-01 2.64188260e-01 1.31466910e-01 -1.54473707e-01 -3.54027689e-01 -1.93621725e-01 -3.35431397e-01 -4.76616234e-01 2.06029937e-02 -3.50014389e-01 1.81897417e-01 8.24268311e-02 -2.30000913e-02 -2.42241010e-01 4.12289798e-02 7.83494562e-02 -1.72393918e-01 2.17643529e-01 -2.60747671e-02 -4.16156232e-01 3.03777814e-01 -5.40386960e-02 6.67198151e-02 
4.56765771e-01 3.69238377e-01 1.72009140e-01 3.90102506e-01 -2.16501951e-03 -3.63569856e-02 1.27335161e-01 -4.36197102e-01 6.97868913e-02 1.27704307e-01 -4.65597659e-02 -2.03634501e-02 -8.04828554e-02 -2.23698825e-01 9.33380425e-02 2.99127866e-02 1.32678315e-01 -4.50977460e-02 5.54340482e-01 -4.47369814e-01 -1.33852601e-01 -2.27702424e-01 -5.08142710e-02 3.23214084e-02 -4.97159541e-01 -6.76678792e-02 -1.18632764e-02 -3.19975317e-01 6.18933976e-01 -8.35424006e-01 -2.17268616e-01 5.38547099e-01 7.82708168e-01 -7.28415847e-01 -3.76167625e-01 7.20639646e-01 -2.98585370e-02 3.13809663e-01 3.75187397e-01 2.95831412e-01 7.43884027e-01 2.41576448e-01 -3.37805748e-02 3.47715080e-01 3.96632254e-01 1.10012099e-01 -6.08244777e-01 2.12903321e-01 -1.63437888e-01 -7.51893967e-02 -9.13728237e-01 -4.68309462e-01 3.74763846e-01 2.36880124e-01 4.99360204e-01 -1.27683178e-01 -1.62899733e-01 -4.91339564e-02 1.81253344e-01 -2.63938606e-01 3.13691497e-01 1.07467033e-01 -7.42356539e-01 -2.58050889e-01 -3.22008431e-01 6.78506076e-01 -1.45394772e-01 2.70696759e-01 3.50923687e-01 -7.57573247e-02 -1.38874620e-01 -1.54216737e-01 1.37292415e-01 -7.87530318e-02 -5.00483513e-02 5.29465675e-01 -7.31449962e-01 5.23882926e-01 -4.19330299e-01 -9.64103192e-02 -7.04261661e-01 3.08752805e-02 3.94386828e-01 2.87204802e-01 4.62714702e-01 5.20519018e-01 2.81967223e-03 -1.11178473e-01 3.29341650e-01 -4.91796076e-01 -1.48085549e-01 3.37434769e-01 -5.45709729e-01 -2.42450729e-01 -2.33173251e-01 -1.59645155e-01 5.44530272e-01 -2.20042616e-01 4.29684460e-01 -2.49880582e-01 6.03968561e-01 -3.21873307e-01 -1.12634033e-01 1.81830376e-01 -3.89184594e-01 -8.15919116e-02 -6.89003021e-02 7.63039812e-02 1.95880234e-02 -1.96111768e-01 3.62800628e-01 4.66887712e-01 5.83167613e-01 -1.32455155e-01 -1.76099494e-01 1.39247552e-01 2.91690230e-04 2.45634049e-01 7.26491213e-04 -2.00259775e-01 -2.07394928e-01 5.47502339e-01 1.02231331e-01 1.24914005e-01 -9.48404223e-02 2.15078831e-01 -2.26227790e-01 -5.45244068e-02 5.22263587e-01 1.91077724e-01 -3.76865506e-01 -2.76545167e-01 -1.89595520e-02 1.13320798e-01 4.43316698e-01 -7.80223310e-03 6.66806459e-01 -3.06726873e-01 2.46964782e-01 7.84370899e-02 -1.28809810e-01 -5.39565802e-01 -1.98737055e-01 -1.61629170e-01 -3.02407473e-01 -9.33168381e-02 -3.91638517e-01 1.05690464e-01 1.98325083e-01 1.73466951e-02 -1.78393722e-01 1.25586331e-01 1.71870857e-01 -1.55802384e-01 2.79657423e-01 8.37599337e-02 -5.50163031e-01 2.16353491e-01 -1.55411944e-01 3.19299370e-01 3.90772939e-01 3.30768943e-01 3.21978390e-01 3.36048454e-01 -8.26082975e-02 -2.19058156e-01 1.17783785e-01 -4.63917285e-01 1.32965475e-01 3.70263979e-02 4.70926464e-02 -1.88485935e-01 3.04759294e-02 -4.24536645e-01 -1.19624823e-01 6.26472831e-02 7.19535574e-02 -1.17991343e-01 7.79137611e-01 -3.08348000e-01 -2.85789147e-02 -1.23305023e-01 -8.19939896e-02 -1.82029188e-01 -4.57080245e-01 -1.45158798e-01 2.03296900e-01 -3.76549602e-01 8.43308747e-01 -6.78866088e-01 -2.71210581e-01 3.46639276e-01 5.15377283e-01 -7.02733040e-01 -3.25193912e-01 7.90245235e-01 1.20106429e-01 1.75307781e-01 4.03955936e-01 2.31727779e-01 4.24922645e-01 3.54552150e-01 -7.68869817e-02 2.86569037e-02 2.98314989e-01 1.87978402e-01 -3.57333481e-01 -5.52355777e-03 1.02877691e-01 2.87557989e-02 -7.47255445e-01 -2.54889637e-01 2.26086974e-01 1.32456273e-01 3.13082397e-01 1.53662413e-02 -2.44875416e-01 5.08328080e-02 -3.98641359e-03 -2.89956808e-01 2.00657517e-01 -6.49997815e-02 -6.43398166e-01 -2.27469847e-01 -2.91537702e-01 5.35060704e-01 -1.61430389e-01 2.72467971e-01 1.79417342e-01 
-9.78902429e-02 -1.58466801e-01 -9.94352624e-02 2.16749936e-01 -3.77055258e-03 1.51243865e-01 3.97949338e-01 -7.20643342e-01 5.21070600e-01 -2.63261348e-01 -1.96940526e-01 -5.89750171e-01] ```<|||||>A quicker way to reproduce the error is actually to do ``` $ python -m transformers.onnx --model=microsoft/swin-base-patch4-window12-384-in22k test_onnx/from_hub/ --feature='image-classification' --preprocessor=feature_extractor Validating ONNX model... -[✓] ONNX model output names match reference model ({'logits'}) - Validating ONNX Model output "logits": -[✓] (3, 21841) matches (3, 21841) -[x] values not close enough (atol: 0.0001) Traceback (most recent call last): File "/Users/bleguay/.pyenv/versions/3.10.0/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/Users/bleguay/.pyenv/versions/3.10.0/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 180, in <module> main() File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/__main__.py", line 173, in main validate_model_outputs(onnx_config, preprocessor, model, args.output, onnx_outputs, args.atol) File "/Users/bleguay/.pyenv/versions/piraeus-model/lib/python3.10/site-packages/transformers/onnx/convert.py", line 455, in validate_model_outputs raise ValueError( ValueError: Outputs values doesn't match between reference model and ONNX exported model: Got max absolute difference of: 5.501317977905273 for [-0.7436858 1.7425342 -0.70307475 ... -0.6807119 -0.511953 -0.6309131 ] vs [-1.0538013 1.1691334 -1.2402858 ... 0.5939584 -0.33656883 -1.2150081 ] ``` I also found that: https://github.com/huggingface/transformers/pull/19390/ Hope this helps.<|||||>Hello everyone, any news on this topic ? I don't want to be too pushy, but could I have a quick follow up ? Thanks 🤗 <|||||>cc @lewtun would be great if you could take a look at this<|||||>Thank you @NielsRogge <|||||>I fully understand that this might not be your current priority. Though, could I have some information about when should I expect some follow up on this ? I count on this ONNX format for production deployment purposes and I'd like to be able to organize and communicate regarding my roadmap. Thanks again for you work. 🤗 <|||||>@BenoitLeguay Let me have a look.<|||||>@BenoitLeguay I can't reproduce the issue with `python -m transformers.onnx --model microsoft/swin-base-patch4-window12-384-in22k swin-bas-onnx/ --feature image-classification`. Is it always failing for you? With: ``` transformers==4.24.0 torch==1.12.1+cu113 onnx==1.12.0 onnxruntime==1.12.0 ``` Could you try with these versions? Or give more details on your setup / a dockerfile to reproduce? I notice that the ONNX export gives a lot of warnings, so it could be the exported models can not handle certain dynamic cases. It is part of https://github.com/huggingface/optimum/issues/503 to have a better user experience with exported ONNX models and the case they support / dont' support. You can expect more work being put on the ONNX export in the Optimum lib (doc [here](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model)), notably a stronger test suite for the exported models.<|||||>First, thanks a lot @fxmarty ! Indeed, on my new computer (not the one I used from the time I had this issue) it works with the libraries versions you specified. 
I'm currently not able to access my old computer, but I'll share more information about my setup whenever I can. I'm looking forward to checking the Optimum improvements, thanks for sharing!<|||||>Ok great! Don't hesitate to share if you can reproduce the issue with your old laptop + `transformers==4.24.0`. Otherwise it could be that it has been fixed in this version (maybe https://github.com/huggingface/transformers/pull/19475 )<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello, sorry for the late response. As I returned my old laptop, I cannot reproduce the issue. However, I was able to retrieve the setup that was causing this issue 🔴 : `transformers == 4.23.1 / torch == 1.12.1 / onnx = 1.12.0`, MacBook Pro 2016, macOS Monterey 12.5. It's now working well on this setup 🟢 : `transformers == 4.25.1 / torch == 1.12.1 / onnx = 1.13.0`, MacBook Pro 2021, macOS Monterey 12.5. Thank you for your time, huggingface team :)
transformers
19,779
closed
Added translation of custom_models.mdx to Portuguese Issue #16824
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) Currently, only the custom_models.mdx file was translated as of this PR. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-21-2022 00:54:27
10-21-2022 00:54:27
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hello @omarespejel, can you check if everything is ok in this PR?
transformers
19,778
closed
[WIP] Update model addition guide
This PR updates the contribution guide for how to add a model since I don't think we do the *call-for-model-addition* thing anymore. Also reworked the section headers a bit so certain steps like "write a conversion script" can actually be linked to, and also made some edits for clarity. Please let me know if any of the other content is outdated (also cc @NielsRogge since you've added many models before!). The last thing I need to do is update the `transformers_overview.png` to be less dense :)
10-20-2022 21:13:42
10-20-2022 21:13:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_19778). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR. It's going to take me too long to check step by step that the actual technical content is unchanged, and I don't think this is a crucial part of our docs to update as not that many users contribute new models, and no one reported they had trouble understanding this guide. So I'd just leave the current version as is or focus on one/two fixes you feel are really important.<|||||>Sounds good! The fixes I'll target are: 1. Update the intro to remove the _call-for-model-addition_. 2. Update that `cookiecutter` generates a `.mdx` file instead of a `.rst` file. 3. Fix the checklist so it doesn't have bullets and numbers. What do you think about creating a new PR with these fixes and leaving this one here in case we come back to it in the future?<|||||>Sure!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing for now, saving larger changes for later in the future :)
transformers
19,777
closed
Request: Providing gen_kwargs for Seq2SeqTrainer
### Feature request As far as I am aware, there is currently no way to customize the generation behavior when generation is run as part of the training loop. `Seq2SeqTrainer.predict()` takes a `gen_kwargs` argument to pass along to `generate()`, but there is no way to pass `gen_kwargs` to `predict()` when predict is called automatically as part of the training loop. ### Motivation Customizing generation may be necessary, and doing this in the training loop, as opposed to after training, would be ideal. ### Your contribution I'm happy to contribute; the main question is how this could be elegantly integrated into the current Trainer API.
10-20-2022 19:36:49
10-20-2022 19:36:49
May be of interest to @gante @sgugger <|||||>You can customize any generation kwarg by passing it to the model configuration (soon there will be a generation configuration that will make this clearer).<|||||>Ah, I wasn't aware of that. It doesn't have all the generation parameters though; in my case I wanted to use prefix_allowed_tokens_fn, which the model config doesn't seem to have. It's easily solved with a trainer subclass for me though, and it's probably a specific enough use case that the API doesn't need to be modified. Since the most common params can be used that way, I'll consider this resolved.<|||||>@atyshka I have the same issue using `prefix_allowed_tokens_fn` with Seq2SeqTrainer. Can you share how you solved it through a trainer subclass?
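For anyone landing here later, a minimal sketch of the subclass approach mentioned above. It assumes a `transformers` version whose `Seq2SeqTrainer.evaluate()`/`predict()` forward extra `**gen_kwargs` to `generate()`; the class name and the way the constraint function is stored are illustrative choices, not the original poster's code:

```python
from transformers import Seq2SeqTrainer


class ConstrainedSeq2SeqTrainer(Seq2SeqTrainer):
    """Always pass a prefix_allowed_tokens_fn to generation, including the in-training evaluation."""

    def __init__(self, *args, prefix_allowed_tokens_fn=None, **kwargs):
        super().__init__(*args, **kwargs)
        self._prefix_fn = prefix_allowed_tokens_fn

    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval", **gen_kwargs):
        # Inject the constraint before the parent class builds its generation kwargs.
        gen_kwargs.setdefault("prefix_allowed_tokens_fn", self._prefix_fn)
        return super().evaluate(
            eval_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix, **gen_kwargs
        )

    def predict(self, test_dataset, ignore_keys=None, metric_key_prefix="test", **gen_kwargs):
        gen_kwargs.setdefault("prefix_allowed_tokens_fn", self._prefix_fn)
        return super().predict(
            test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix, **gen_kwargs
        )
```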
transformers
19,776
closed
Trying to implement Transformer-XL using PyTorch Lightning
### System Info ```shell - `transformers` version: 4.22.2 - Platform: Linux-5.15.0-50-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.10.0 - PyTorch version (GPU?): 1.11.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Code: ``` import itertools import math import os import sys import time # third party libraries import datasets import numpy as np import torch import torch.nn as nn import torch.optim as optim from torch.utils.data import DataLoader import pytorch_lightning as pl from transformers import TransfoXLModel, TransfoXLTokenizer, TransfoXLConfig class LitMemTransformerLMWT103(pl.LightningModule): """Memory Transformer for Attentive Language Models on WikikText 103""" def __init__(self, model_name_or_path, **kwargs) -> None: """Initialize the model""" super().__init__(**kwargs) self.configuration = TransfoXLConfig() self.tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103") self.model = TransfoXLModel.from_pretrained(model_name_or_path, config=self.configuration) def forward(self, **inputs): return self.model(**inputs) def training_step(self, batch): outputs = self(**batch) loss = outputs[0] return loss def train_dataloader(self) -> DataLoader: dm = datasets.load_dataset('wikitext', 'wikitext-103-v1') return DataLoader(dm['train'], batch_size=64, num_workers=self.config.cpu_count) def configure_optimizers(self) -> torch.optim.Adam: """Use the Adam torch optimizer.""" optimizer = torch.optim.Adam(self.model.parameters(), lr=self.config.learning_rate) return optimizer ``` Error: ``` Traceback (most recent call last): File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 471, in <module> torch_train() # pylint: disable=no-value-for-parameter File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 460, in torch_train trained_module, profile = train(params) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 346, in train trainer.fit(module) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit self._call_and_handle_interrupt( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 723, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl results = self._run(model, ckpt_path=self.ckpt_path) File 
"/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run results = self._run_stage() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage return self._run_train() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train self.fit_loop.run() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance self._outputs = self.epoch_loop.run(self._data_fetcher) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance batch_output = self.batch_loop.run(batch, batch_idx) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance result = self._run_optimization( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization self._optimizer_step(optimizer, opt_idx, batch_idx, closure) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step self.trainer._call_lightning_module_hook( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook output = fn(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step optimizer.step(closure=optimizer_closure) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 286, in optimizer_step optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step return optimizer.step(closure=closure, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper return func(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/optim/adam.py", line 100, in step loss = closure() File 
"/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure closure_result = closure() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__ self._result = self.closure(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure step_output = self._step_fn() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values()) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook output = fn(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 349, in training_step return self.model(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 963, in forward output = self.module(*inputs[0], **kwargs[0]) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward output = self.module.training_step(*inputs, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/ml/lit_memxfmr_wt103.py", line 35, in training_step outputs = self(**batch) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/ml/lit_memxfmr_wt103.py", line 32, in forward return self.model(**inputs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl result = forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'text' Traceback (most recent call last): File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 471, in <module> torch_train() # pylint: disable=no-value-for-parameter File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/opt/moksh/venv/lib/python3.8/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 460, in torch_train trained_module, profile = train(params) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/torchtrain.py", line 346, in train trainer.fit(module) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 770, in fit self._call_and_handle_interrupt( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 721, in _call_and_handle_interrupt return 
self.strategy.launcher.launch(trainer_fn, *args, trainer=self, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 93, in launch return function(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 811, in _fit_impl results = self._run(model, ckpt_path=self.ckpt_path) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1236, in _run results = self._run_stage() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1323, in _run_stage return self._run_train() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1353, in _run_train self.fit_loop.run() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 266, in advance self._outputs = self.epoch_loop.run(self._data_fetcher) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 208, in advance batch_output = self.batch_loop.run(batch, batch_idx) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance outputs = self.optimizer_loop.run(split_batch, optimizers, batch_idx) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run self.advance(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 203, in advance result = self._run_optimization( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 256, in _run_optimization self._optimizer_step(optimizer, opt_idx, batch_idx, closure) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 369, in _optimizer_step self.trainer._call_lightning_module_hook( File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1595, in _call_lightning_module_hook output = fn(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/core/lightning.py", line 1646, in optimizer_step optimizer.step(closure=optimizer_closure) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 168, in step step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 286, in optimizer_step optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 193, in optimizer_step return self.precision_plugin.optimizer_step(model, optimizer, opt_idx, closure, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 155, in optimizer_step return optimizer.step(closure=closure, **kwargs) File 
"/opt/moksh/venv/lib/python3.8/site-packages/torch/optim/optimizer.py", line 88, in wrapper return func(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/optim/adam.py", line 100, in step loss = closure() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 140, in _wrap_closure closure_result = closure() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 148, in __call__ self._result = self.closure(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 134, in closure step_output = self._step_fn() File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 427, in _training_step training_step_output = self.trainer._call_strategy_hook("training_step", *step_kwargs.values()) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1765, in _call_strategy_hook output = fn(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 349, in training_step return self.model(*args, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 963, in forward output = self.module(*inputs[0], **kwargs[0]) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 82, in forward output = self.module.training_step(*inputs, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/ml/lit_memxfmr_wt103.py", line 35, in training_step outputs = self(**batch) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl return forward_call(*input, **kwargs) File "/opt/moksh/venv/lib/python3.8/site-packages/systest/src/lib/agent/mixin/ml/lit_memxfmr_wt103.py", line 32, in forward return self.model(**inputs) File "/opt/moksh/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1128, in _call_impl result = forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'text' Exception ignored in: <function Profiler.__del__ at 0x7fa584551670> Traceback (most recent call last): File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/profiler.py", line 170, in __del__ File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/pytorch.py", line 509, in teardown File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/pytorch.py", line 493, in _delete_profilers File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 456, in __exit__ File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 467, in stop File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 493, in _transit_action File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 98, in start_trace File 
"/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 175, in _get_distributed_info ImportError: sys.meta_path is None, Python is likely shutting down Exception ignored in: <function Profiler.__del__ at 0x7fa05a25b670> Traceback (most recent call last): File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/profiler.py", line 170, in __del__ File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/pytorch.py", line 509, in teardown File "/opt/moksh/venv/lib/python3.8/site-packages/pytorch_lightning/profiler/pytorch.py", line 493, in _delete_profilers File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 456, in __exit__ File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 467, in stop File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 493, in _transit_action File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 98, in start_trace File "/opt/moksh/venv/lib/python3.8/site-packages/torch/profiler/profiler.py", line 175, in _get_distributed_info ImportError: sys.meta_path is None, Python is likely shutting down Epoch 0: 0%| | 0/14074 [00:04<?, ?it/s] ``` ### Expected behavior ```shell I expect the model to train. ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
10-20-2022 19:36:08
10-20-2022 19:36:08
Hi @moksh-enf. Could you please share the code snippet on how you are passing input while training, and also share some data examples (if possible)?<|||||>> Hi @moksh-enf. Could you please share the code snippet on how you are passing input while training, and also share some data examples (if possible)? Hi @uahmad235 , the input is passed from a config file, where as the input data I believe is being downloaded within the code.<|||||>> Hi @moksh-enf. Could you please share the code snippet on how you are passing input while training, and also share some data examples (if possible)? Hey @uahmad235, inputs are being passed via a config file. The data that I am using is from the datasets maintained by hugging face (wikitext-103-v1)<|||||>Hi @moksh-enf . For more context, can you post the code where you are using the `LitMemTransformerLMWT103` class ?<|||||>> Hi @moksh-enf . For more context, can you post the code where you are using the `LitMemTransformerLMWT103` class ? Hey @uahmad235, I am passing a params file which checks if the module name and chooses the appropriate model ![image](https://user-images.githubusercontent.com/105234138/199767348-e6942409-e8c8-49d7-9d85-36931a113a86.png) These are all the arguments that go into the params file ![image](https://user-images.githubusercontent.com/105234138/199768073-b0038980-4856-4a41-bf80-03dfbf254292.png) <|||||>Hi @moksh-enf. Could you also attach the config file you are passing? Sorry for the late response.<|||||>> Hi @moksh-enf. Could you also attach the config file you are passing? Sorry for the late response. Hey @uahmad235 do you mean you want to know the data that is being passed to the model? How does that help with the error? I believe that the error is in the forward pass of the PyTorch code.<|||||>I mean the params file. Based on the information you have provided, I am unable to regenerate this issue.<|||||>> I mean the params file. Based on the information you have provided, I am unable to regenerate this issue. Are you saying you are able to run the model from hugging face?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
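For reference, the `forward() got an unexpected keyword argument 'text'` error is consistent with the raw wikitext batches (dicts with a `text` column) being unpacked straight into the model. Below is a rough sketch of how the module above could tokenize first; it assumes the module wraps `TransfoXLLMHeadModel` (whose output exposes per-token `losses`) rather than the bare `TransfoXLModel`, and that a pad token has been set on the tokenizer — both are assumptions on my side, not code from the original report:

```python
def collate_batch(self, examples):
    # Tokenize the raw "text" column here so the model never receives a `text=` keyword.
    # Assumes e.g. self.tokenizer.pad_token = self.tokenizer.eos_token was set beforehand.
    texts = [example["text"] for example in examples]
    return self.tokenizer(texts, padding=True, return_tensors="pt")

def train_dataloader(self):
    dm = datasets.load_dataset("wikitext", "wikitext-103-v1")
    return DataLoader(dm["train"], batch_size=64, collate_fn=self.collate_batch)

def training_step(self, batch, batch_idx):
    # With TransfoXLLMHeadModel, passing labels returns per-token losses that can be reduced to a scalar.
    outputs = self.model(input_ids=batch["input_ids"], labels=batch["input_ids"])
    return outputs.losses.mean()
```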
transformers
19,775
closed
Make public versions of private tensor utils
# What does this PR do? This PR makes public versions of `_is_torch`, `_is_tensorflow`, `_is_numpy` and `_is_jax` that are safe to use even when those frameworks are not installed, as I need them for some future PRs. The same functions have also been written independently by @amyeroberts in image_utils; this PR puts them in the generic utils and removes the use of the private ones.
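As a rough illustration of the pattern (not the exact code added here), a framework check that stays safe when the framework is missing typically looks like this:

```python
from transformers.utils import is_torch_available


def is_torch_tensor(obj):
    """Return True only if torch is installed and `obj` is a torch.Tensor."""
    if not is_torch_available():
        return False
    import torch  # only imported once we know torch is available

    return isinstance(obj, torch.Tensor)
```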
10-20-2022 19:17:00
10-20-2022 19:17:00
transformers
19,774
closed
Add warning about restarting runtime to import errors
This PR expands the warnings shown for missing dependencies to remind users that they may need to restart their runtime after installing them. cc @sgugger and @swap-10
10-20-2022 17:43:31
10-20-2022 17:43:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,773
closed
TF: sample generation compatible with XLA and dynamic batch sizes
# What does this PR do? Fixes #19747 (sample generation failing with XLA and dynamic batch sizes) There were two distinct problems: - `_expand_inputs_for_generation` was crashing when the input was a `TensorSpec` with a dynamic batch size (used e.g. with tf.Serving). Fix: `tensor.shape` -> `tf.shape` - Obtaining a new seed with `tf.random.Generator` is broken with `tf.function` for two distinct reasons: 1) the protection we added [here](https://github.com/huggingface/transformers/pull/18044/files) makes unseeded `sample` crash unless we call `sample` with eager execution first (see [here](https://www.tensorflow.org/api_docs/python/tf/random/set_global_generator) why); 2) TF 2.10 + XLA crashes on the line that instantiates it. Fix: replace the TF-suggested way of obtaining a seed through `tf.random.Generator` with `tf.experimental.numpy.random.randint`, since we were creating a Generator from a non-deterministic state anyway 🤷 Notes: the new `tf.experimental.numpy.random.randint` strategy seems to be slightly faster (23.5 ms -> 21.6 ms per generate call on a benchmark with XLA, nvidia 3090, and `t5-small`) and is backward compatible down to TF 2.4.0.
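To illustrate the first fix (my wording, not from the PR): with a dynamic batch dimension the static `tensor.shape[0]` is `None`, so Python-side arithmetic on it breaks inside `tf.function`, while `tf.shape(tensor)[0]` stays a runtime tensor. A minimal sketch:

```python
import tensorflow as tf


@tf.function(input_signature=[tf.TensorSpec([None, None], tf.int32)])  # dynamic batch size
def expand_for_generation(input_ids):
    # tf.shape works even when the static batch size is unknown (None).
    batch_size = tf.shape(input_ids)[0]
    # e.g. repeat each row, as _expand_inputs_for_generation does for num_return_sequences=2
    expanded = tf.repeat(input_ids, repeats=2, axis=0)
    return expanded, batch_size


print(expand_for_generation(tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)))
```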
10-20-2022 14:49:59
10-20-2022 14:49:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,772
closed
Run some TF Whisper tests in subprocesses to avoid GPU OOM
# What does this PR do? Run some TF Whisper tests in subprocesses to avoid GPU OOM (in the PT tests, which run after the TF tests). After TF loads `openai/whisper-large`, it takes almost all of the GPU memory and won't release it, causing GPU OOM in some PT tests later. Current failing CI report [here](https://github.com/huggingface/transformers/actions/runs/3278475639/jobs/5396961322)
10-20-2022 12:22:19
10-20-2022 12:22:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger There are 2 considerations: - `openai/whisper-large` is 6 GB (as a file), quite large. - Also, for the `Whisper` model, `TFWhisper` tests run before the (PyTorch) `Whisper` tests, due to the `_tf_whisper.py` and `_whisper.py` order (`t` vs `w`). If this order were reversed (potentially together with `torch.cuda.empty_cache()`), we might get rid of the GPU OOM. I think it is the combination of the ordering and the very large model that causes the problem. For other models, either PT runs before the TF tests, or the models are not large enough to cause problems.<|||||>I'm very reluctant to skip those slow tests though, so anything else you might think of to fix the issue would be better.<|||||>Close for now while I try to find a way to make @sgugger happy 🧠 <|||||>@sgugger Could you take a quick look (no need to review in detail) at whether this hacky way of using a child process to run the actual testing is acceptable. The downside is that it makes debugging more difficult (you can't set `pdb` in the child process, otherwise it fails). Maybe there is some workaround for debugging in this case though. If this is acceptable, I can apply the same change to other TF test methods that use large checkpoints.<|||||>There is `execute_subprocess_async` in `transformers`, but my first impression is that it's mainly for launching a python command (a training script, for example). It won't make things easier here IMO. Also, the target function (i.e. the one run in the child process) can't have `self` (i.e. the `TestCase` collected by `pytest`), otherwise it hangs, so it should go outside the test class (I haven't done extensive research on this, just my experience). Therefore, a `decorator` doesn't seem possible. Since there are very few such use cases, we can focus on the current approach, then see if there is a way to make the process easier :-) 🙏 <|||||>SGTM!<|||||>After trying the `decorator` approach (for the target function running in a subprocess), I find the original `run_test_in_subprocess` as a simple function much cleaner. Otherwise, we would end up preparing the following in `test_large_logits_librispeech` (and in every involved test method), and passing it to `_test_large_logits_librispeech` ```python start_method = "spawn" ctx = multiprocessing.get_context(start_method) input_queue = ctx.Queue(1) output_queue = ctx.JoinableQueue(1) ``` I would prefer all these constructions being done just once in a single place. Furthermore, inside the decorator we would still need the `ctx` in order to do `process = ctx.Process`, which I believe should be the same `ctx` as the `ctx = multiprocessing.get_context(...)` shown above, but it is not passed to `_test_large_logits_librispeech` and not available in the `wrapper`. Therefore I will merge as it is 🙏 . If you have any better approach, we can discuss it in another PR, thank you 🙏 <|||||>FYI: due to the overhead (2 Python processes with TF), the full test suite for this model will fail at `test_large_batched_generation`. I have to split the batch into 2 smaller batches to pass that test, see the last 2 commits. (When running only a subset of the test suite, we don't need to split this way.)
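For readers outside the repo, the shape of the helper discussed above is roughly the following — a simplified, self-contained sketch rather than the exact `run_test_in_subprocess` that was merged:

```python
import multiprocessing


def _target(input_queue, output_queue, timeout):
    # Runs in the spawned child process: the TF-heavy work goes here, so any GPU memory
    # it grabs is released when the process exits instead of leaking into later tests.
    inputs = input_queue.get(timeout=timeout)
    try:
        result = inputs["x"] * 2  # stand-in for loading a large checkpoint and running the test body
        output_queue.put({"outputs": result, "error": None}, timeout=timeout)
    except Exception as e:
        output_queue.put({"outputs": None, "error": str(e)}, timeout=timeout)
    output_queue.join()


def run_in_subprocess(target_func, inputs, start_method="spawn"):
    ctx = multiprocessing.get_context(start_method)
    input_queue = ctx.Queue(1)
    output_queue = ctx.JoinableQueue(1)
    input_queue.put(inputs, timeout=30)

    process = ctx.Process(target=target_func, args=(input_queue, output_queue, 30))
    process.start()
    results = output_queue.get(timeout=600)
    output_queue.task_done()
    process.join(timeout=600)

    if results["error"] is not None:
        raise AssertionError(results["error"])
    return results["outputs"]


if __name__ == "__main__":
    print(run_in_subprocess(_target, {"x": 21}))  # -> 42
```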
transformers
19,771
closed
TFPreTrainedModel.prepare_tf_dataset() gives ImportError: Datasets module not found; but it is installed and usable (load_dataset() works)
### System Info Google Colab CPU Instance - `transformers` version: 4.23.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: Yes (However, the issue occurs even when not using parallel setup) ### Who can help? @LysandreJik ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I was trying to follow along [this nb from the notebooks repo](https://github.com/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb) This is the [Colab link](https://colab.research.google.com/drive/13OGs8d2N-bAirSWslbNx6kffttwdKXJg?usp=sharing) of my attempt, with the outputs (and the error) ### Expected behavior Detect that the Datasets package is present and use it as required. Maybe there's a really silly mistake I'm making but I really cannot figure out what it could be and I couldn't find any similar results online. Thanks in advance!
10-20-2022 12:15:21
10-20-2022 12:15:21
cc @Rocketknight1 <|||||>I reproduced the error here, working on it now! I suspect this is caused by the recent release of a new version of `datasets`<|||||>Hi, I've diagnosed the issue. It is not related to TF or `prepare_tf_dataset` at all. Instead, the problem is caused by the following: 1) `pip install transformers` followed by importing it, then by initializing one or more model classes 2) `pip install datasets` **after** we've imported and used a class from `transformers` If you do this, `transformers.is_datasets_available()` returns `False`. I believe the underlying reason is that `transformers` imports some libraries (most likely `urllib3`), and then even if `datasets` installs newer versions, they cannot be re-imported until you restart your session. @swap-10 As a short-term workaround, you can simply install `transformers` and `datasets` together in the first block, and then your code should work fine. cc @lhoestq and @sgugger - can we update the pinned version of `urllib3` for `transformers` to stop this happening? Here's [a short Colab](https://colab.research.google.com/drive/1Ydi_fJe4xtL9zV3PZXrbG8iZUZP9P_0N?usp=sharing) to reproduce the issue. The issue occurs both when I install `transformers` from `main` or when I use the latest release version.<|||||>Update: Manually pinning urllib3 in the colab didn't work - the problem is another library. Trying to figure it out!<|||||>Wait, this is just me being stupid. This is caused by us caching `_datasets_available` in `import_utils`. As a result, this will always happen if you import and use transformers classes before you `pip install datasets` because this will cache a value of `False` until you restart your runtime. There's no bug to fix here in `transformers` - @swap-10 you just need to install `datasets` before importing `transformers`!<|||||>Thanks for digging into this @Rocketknight1 !<|||||>> Wait, this is just me being stupid. This is caused by us caching `_datasets_available` in `import_utils`. As a result, this will always happen if you import and use transformers classes before you `pip install datasets` because this will cache a value of `False` until you restart your runtime. There's no bug to fix here in `transformers` - @swap-10 you just need to install `datasets` before importing `transformers`! Interesting. Thank you! I used Dataset.to_tf_dataset() meanwhile to get my work done. However, I guess this isn't exactly anticipatable behaviour? Perhaps this should at least be mentioned somewhere in the documentation? (if it isn't already)<|||||>It's quite niche, but I agree it can be confusing for users. @sgugger I could make a PR to change the warning messages to ask users to restart their runtimes if they installed a dependency after importing `transformers`, WDYT?<|||||>If you can catch it properly, why not yes!<|||||>Closing this issue now that we've added the user warnings about restarting the runtime
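To make the caching behaviour concrete, the availability check is essentially a module-level value computed once at import time, along these lines (a simplified sketch, not the exact transformers code):

```python
import importlib.util

# Evaluated once, when the module is first imported. Installing `datasets` afterwards
# does not change this value until the Python process (e.g. the Colab runtime) restarts.
_datasets_available = importlib.util.find_spec("datasets") is not None


def is_datasets_available():
    return _datasets_available
```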
transformers
19,770
closed
Change the import of kenlm from github to pypi
# What does this PR do? Fixes #19686 ## Who can review? @patrickvonplaten
10-20-2022 12:02:40
10-20-2022 12:02:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR. The main interest of having the package on PyPI is that we can put it in our setup and not have to write a specific command to install it. You should add it to the extras for audio. Done<|||||>Please remove all manual installs of `kenlm`. I don't know if I was unclear in my previous comments, but putting it in the setup will make it get installed when we do `pip install -e .[xxx]`, so there is no need for a separate install line.<|||||>> Please remove all manual installs of `kenlm`. I don't know if I was unclear in my previous comments, but putting it in the setup will make it get installed when we do `pip install -e .[xxx]`, so there is no need for a separate install line. Oh, I misunderstood what you put in the earlier comment. Now removed.<|||||>I think we still have 2 files to clean up? - docker/transformers-pytorch-gpu/Dockerfile - docker/transformers-doc-builder/Dockerfile
transformers
19,769
closed
Add sentencepiece to BertJapaneseTokenizer
# What does this PR do? This PR adds a class to use [sentencepiece](https://github.com/google/sentencepiece) as a subword tokenizer with `BertJapaneseTokenizer`. Thanks to https://github.com/huggingface/transformers/pull/19043, it's now more convenient to use Japanese models in transformers. However, our model (https://huggingface.co/nlp-waseda/roberta-base-japanese, currently the SOTA MLM model in Japanese) was trained with sentencepiece, and `BertJapaneseTokenizer` currently only supports WordPiece. Sentencepiece support is the last remaining step needed to use our model without any data preprocessing, so we added it. We also found that https://github.com/huggingface/transformers/pull/19043 did not update the documentation; if necessary, we can update the documentation for these 2 PRs together. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Models: @LysandreJik Library: @n1t0 Documentation: @sgugger @r-terada @hiroshi-matsuda-rit If you have time, you are welcome to review this PR (they are the most recent contributors to `BertJapaneseTokenizer`).
10-20-2022 11:21:50
10-20-2022 11:21:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR. The Transformers library is not a modular toolbox however, and a model called `BertJapanese` should have the same tokenizer as BERT for obvious reasons. To use another tokenizer you can simply put the tokenizer code in the repo of your model, using our [custom code feature](https://huggingface.co/docs/transformers/custom_models).<|||||>@sgugger Thanks for your reply. We believe sentencepiece is as important as [sudachi](https://github.com/WorksApplications/SudachiPy) and [jumanpp](https://github.com/ku-nlp/pyknp), which are also not part of the original BERT tokenizer. Since there will be more sentencepiece-based Japanese models in the future, if we use the custom code feature, it will be difficult to maintain a lot of model repos.<|||||>Hi @sgugger, In [our blog post (in Japanese)](https://www.megagon.ai/jp/blog/autotokenizer/), we explained how `trust_remote_code=True` becomes an unavoidable barrier to utilizing custom Japanese transformers models on the application side. As you mentioned above, we should consider not making `BertJapanese` a messy toolbox. I think the answer is generalizing `SentencePiece` as a universal sub-word tokenizer in huggingface/transformers, like [tokenizers/sentencepiece_bpe](https://github.com/huggingface/tokenizers/blob/96a9e5715c5e71ddc26f36fc456c95d729b23923/bindings/python/py_src/tokenizers/implementations/sentencepiece_bpe.py#L10). Actually, many Japanese transformers models are trained with [AlbertTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/albert/tokenization_albert.py) to sub-tokenize the pre-tokenized input text with `SentencePiece`. We want to find a way to combine `mecab`, `sudachi`, or `jumanpp` (implemented in BertJapaneseTokenizer) and `SentencePiece` (implemented in AlbertTokenizer), like [tokenizers/pre_tokenizers.Sequence](https://github.com/huggingface/tokenizers/blob/main/bindings/python/py_src/tokenizers/pre_tokenizers/__init__.py#L11). Do you have any good ideas?<|||||>I agree with @sgugger that we don't want the library to be a modular toolbox, and this PR goes against this philosophy. However, I think the file is already modular, and the changes here directly align with the existing code. I would be fine with merging this PR, but I think we should consider splitting this file into separate components instead of having it be configurable as it is right now.<|||||>@sgugger Thanks for checking. I have fixed the breaking change and removed the fixture.
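For context on the two-stage setup discussed above, a bare-bones version of "word-level pre-tokenization first, SentencePiece sub-tokenization per word afterwards" can be sketched as follows; the `spiece.model` path is a placeholder, and the whitespace pre-tokenizer stands in for a real MeCab/Sudachi/Juman++ step:

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="spiece.model")  # placeholder path to a trained model


def subword_tokenize(text, pre_tokenize=str.split):
    # 1) word-level pre-tokenization (MeCab, Sudachi or Juman++ in the real pipeline)
    words = pre_tokenize(text)
    # 2) SentencePiece sub-tokenization applied to each word independently
    pieces = []
    for word in words:
        pieces.extend(sp.encode(word, out_type=str))
    return pieces
```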
transformers
19,768
closed
Fix image segmentation pipeline errors, resolve backward compatibility issues
# What does this PR do? - Replaces the hard-coded mask score binarization threshold used in the `post_process_panoptic_segmentation` and `post_process_instance_segmentation` methods of the DETR and MaskFormer feature extractors with the `mask_threshold` argument - Adds "copied from" statements to the MaskFormer post-processing helper functions - Reintroduces the mask_threshold argument to the `ImageSegmentationPipeline` (resolves backward compatibility issues) - Renames the `ImageSegmentationPipeline` task argument to subtask to avoid conflicts; it defaults to None - If the `ImageSegmentationPipeline` subtask is unset, tries performing panoptic segmentation first, followed by instance and semantic segmentation - Fixes broken `ImageSegmentationPipeline` tests Reintroducing the `mask_threshold` argument and setting it to 0 fixes the test errors and ensures at least one mask is returned for any input and model. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
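A short usage sketch of the arguments this PR touches — the checkpoint, image path and threshold values are just examples:

```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")
# subtask defaults to None (panoptic, then instance, then semantic segmentation is attempted);
# mask_threshold=0.0 keeps every pixel whose mask score is above zero.
outputs = segmenter("path/to/image.jpg", subtask="panoptic", mask_threshold=0.0)
for segment in outputs:
    print(segment["label"], segment["score"])
```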
10-20-2022 10:57:48
10-20-2022 10:57:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
19,767
closed
[PoC] Add resources to each model's doc page
# What does this PR do? This PR is a small PoC to add resources (notebooks, scripts, blogs, etc.) to each model's doc page. Ideally (not sure if it's possible), it'd be great if we could (partially) automate this. The idea is that, if a new model is added to the FOR_IMAGE_CLASSIFICATION_MAPPING for instance, the doc page will automatically add a link to the image classification notebooks and scripts. Not sure if this is possible, cc @sgugger.
10-20-2022 10:03:16
10-20-2022 10:03:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>One question I have: does it matter if the resources diverge from the API described in the docs?<|||||>@stevhliu, I think this is a pretty significant improvement for the model pages. Would you be down to try doing something similar for other high-profile architectures? If so, I would encourage the following: - Identifty the 20 most popular model architectures, either through number of downloads or through doc pages views - Open an issue to track the update of these 20 most popular architectures. - Start working on some of them one at a time - Open this up to community contributions Would you be down to kickstart such a project? In doing so, I'm sure we'll have more visibility over what we can automate, and what we cannot automate. I'm pretty sure we can automate a portion of it but I don't think we can have high-quality resources here only using automation.<|||||>Closing this PR as it was just a PoC. Will continue the work in other PRs