Dataset schema:
- repo: string (1 distinct value)
- number: int64 (1 – 25.3k)
- state: string (2 distinct values)
- title: string (length 1 – 487)
- body: string (length 0 – 234k)
- created_at: string (length 19)
- closed_at: string (length 19)
- comments: string (length 0 – 293k)
transformers
21,774
closed
[WIP] Add Seaformer
<!-- Remove if not applicable --> # What does this PR do? Fixes #21668 Seaformer is a two-branch architecture with Squeeze enhanced Axial Transformer. <br> Initialized as **tokenizer_type** = Standalone <br> **is_encoder_decoder_model** = False, since its encoder only. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? #21668 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @alaradirik thanks for offering help with this PR, please let me know about any changes required. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-24-2023 05:34:15
02-24-2023 05:34:15
Hi @inderpreetsingh01, SeaFormer is a mobile-friendly semantic segmentation model but I see that the PR is for a language model? For reference, the best way to add a new model is to identify a similar model in the library (I'd say SegFormer in this case), create a new branch and initialize the files with `transformers-cli add-new-model-like`. You can refer to [this page](https://github.com/huggingface/transformers/blob/main/templates/adding_a_new_model/README.md) for more information.<|||||>Thanks @alaradirik, I have made the changes you mentioned (initialized the model like SegFormer) in a new branch and raised new PR #21819.
transformers
21,773
closed
[ProphetNet] Fix gradient checkpointing bug
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21737 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -- #21737 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-24-2023 01:52:45
02-24-2023 01:52:45
The code quality check seems to have failed due to some other files that were not changed; could anyone please confirm whether that is the case?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>Hey @yhl48 -- same comment as in [here](https://github.com/huggingface/transformers/pull/21772#issuecomment-1443568457) (and the other PR has to be merged first) :)<|||||>(#21772 contains the changes here, closing this PR)
transformers
21,772
closed
[GPT2] Fix gradient checkpointing bug
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21737 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. -- #21737 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-24-2023 00:56:04
02-24-2023 00:56:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>Hey @yhl48 👋 to make our CI pass, you'll have to run `make fix-copies` on your `transformers` folder, and then push the code again. In a nutshell, we have a system in place that ensures that the code in a few models stays synchronized. `make fix-copies` pushes the changes of the canonical models (such as GPT2) into the others :)<|||||>@yhl48 uhmmm something went wrong. What does your terminal print after running `make fix-copies`?<|||||>``` python utils/check_copies.py --fix_and_overwrite python utils/check_table.py --fix_and_overwrite python utils/check_dummies.py --fix_and_overwrite python utils/check_task_guides.py --fix_and_overwrite ```<|||||>I think the file `transformers/src/transformers/models/decision_transformer/modeling_decision_transformer.py` has to be edited as well. Also, because I was working on the main branch locally and pushed to a different branch, that has caused some confusion with #21773. I should probably keep just one of #21772 and #21773, fixing the two models in one PR.<|||||>@yhl48 since this contains the changes of #21773, I'm going to merge this one and close the other PR :)
transformers
21,771
closed
Chunkable token classification pipeline
This PR improve the TokenClassificationPipeline by extending its usage to tokenized texts longer than `model_max_length` by returning overflowing tokens as chunks rather than truncating texts. To enable the use of this extended feature, you must use a fast tokenizer with an aggregation strategy different to `"none"` and provide a `stride` number. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-23-2023 23:14:05
02-23-2023 23:14:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>For whoever is reading this, I will quickly correct the alignment issues which concerns tokens that are removed due to special tokens mask :)<|||||>cc @Narsil I'll let you review and decide if it maybe would be more suitable as code on the hub or needs to be in Transformers.<|||||>Thank you for this PR. It looks promising. > is now able to process sequences longer than 512. Do you have a specific model in mind, `512` seems oddly specific. > if it maybe would be more suitable as code on the hub or needs to be in Transformers. I think it really depends on the complexity of the resulting code, and the cross over with other parameters. @LucCailliau the tests aren't passing yet. I don't want to do a full review before the tests are passing. Some notes: - Existing tests cannot be modified and must be passing - Overall code should be relatively simple (the current state looks ok on the surface). - The most complex part (I think it should be the conflict resolution in overlapping parts) must be as clear as possible. It's ok to put it into another function for special handling. - Unless it causes an explosion in complexity, it should work with all `aggregation_strategy`. The minimum are `NONE` and `SIMPLE`. If it causes and explosion in complexity, we need to forbid the use of the unsupported combinations - We need good tests for this feature: - Make sure it actually solves what this PR is set to do (handler longer than `model_max_length` inputs`). (Both a slow and fast test) - Check with `aggregation_strategy` parameters. - Check errors if preventing some parameter combo. Does this make sense ? For the sake of getting this moving forward faster I actually suggest splitting this over several PRs. The first PR should be the move to `ChunkPipeline` and nothing else should be modified. Then adding the new parameter. We would need the second one, to be close enough to good state to merge the first (there's no point in changing the pipeline type if we're not able to support this `process_all` parameter correctly. Don't hesitate to ask more question should you need it.<|||||>@Narsil, all tests passed except the code quality. I used black but it doesn't pass. I also update the schema above to explain the algorithm for update/aggregate scores<|||||>> @Narsil, all tests passed except the code quality. I used black but it doesn't pass. I also update the schema above to explain the algorithm for update/aggregate scores try `pip install -e .[quality] && make fixup` to use the correct black version.<|||||>@Narsil, wait, just one more thing to add and we can go<|||||>@Narsil, I checked everything on my side, we can go for it :)<|||||>@Narsil, - I updated the documentation as mentioned - Updated sanitize_parameters, you just have to provide stride now - I left unchanged the forward method as we pass only the tokenizer inputs to the model - Correct spaces for readability - I also provided an example above between the current implementation with this one What do you think about it?<|||||>@Narsil, Finally, it is better to not update the scores and merge results after entities aggregation. Each chunk is entirely pass through the model hence and we aggregate the results. With the sentence: - [...] Paris Hilton [...] We have the corresponding chunks: - [...] Paris - [...] Paris Hilton [...] - Hilton [...] 
With the following entities: - Paris -> LOC - Paris Hilton -> PER - Hilton -> ORG The first step consists of merging results backward, and the entities become: - Paris -> LOC - Paris Hilton -> PER The last step then merges results forward to get the desired entity: - Paris Hilton -> PER If we find different entities at the same start/end index, we take the longest one, and if lengths are equal, we take the highest score. This approach is clearly better. We passed all the tests. I'll start creating a validation set to have results of what we've done.<|||||>> I'll start creating a validation set to have results of what we've done. Sounds great ! Again don't hesitate to ask for resources for larger runs.<|||||>> > I'll start creating a validation set to have results of what we've done. > > Sounds great ! Again don't hesitate to ask for resources for larger runs. Great, do you have a specific model and/or dataset to perform our tests? I am also interested in resources. The tests must be done with `aggregation_strategy` set to `"simple"`, `"first"`, `"average"` and `"max"`. If `None` is set we can have the following: "We went to Manhattan Burgers, it was awesome!" - "Manhattan" -> "B-LOC" - "Burgers" -> "I-ORG" And here again we can update scores as we did previously, but this is not perfect. Corrections can only be applied if `aggregation_strategy` is different from `None`.<|||||>@Narsil, I finished the tests for this pipeline and the results are convincing, since it improves on the current implementation. ### A summary of the PR: This PR improves the TokenClassificationPipeline by extending its usage to tokenized texts longer than `model_max_length`, returning overflowing tokens as chunks rather than truncating texts. To enable this extended feature, you must use a fast tokenizer with an aggregation strategy different from `"none"` and provide a `stride` number. ### Approaches: Two approaches were experimented with: 1. Updating token scores in overlapping parts. 2. Aggregating entities across all chunks, regardless of the overlapping parts. The first approach, which consists of keeping the highest score per token, is not the best: a more "confident" token does not mean it is more likely to be correct, as adversarial attacks show plainly. The second approach (the selected one) consists of processing each chunk and aggregating entities regardless of overlapping parts. In the final aggregation step, we select the best entity in overlapping parts with a rule: we first look at the longest entity, and if entities have the same length, we take the entity with the highest score. **Note that ranking the best entity first on its length, then on the highest score (if lengths are equal), gives better results than just taking the highest score.** Example: Given the following entities from their respective chunks: "New York" -> "LOC": from chunk no. 1 "New York City" -> "LOC": from chunk no. 2 The remaining entity in the aggregated entities will be "New York City" -> "LOC" ### Results In order to compare the current implementation with this one, we generated labeled text from the conll2003 dataset (available on the Hub), then compared the number of exact matches and wrong matches in the first chunk only for each implementation.
You can download the notebook as HTML: [token_classification_comparison.zip](https://github.com/huggingface/transformers/files/10892808/token_classification_comparison.zip) Across 378 texts (with more than 1 chunk) we have: `aggregation_strategy="simple"` - **12862** exact matches and **2042** wrong matches for the proposed implementation - **12739** exact matches and **2181** wrong matches for the current implementation `aggregation_strategy="first"` - **13083** exact matches and **1415** wrong matches for the proposed implementation - **12984** exact matches and **1478** wrong matches for the current implementation `aggregation_strategy="average"` - **13009** exact matches and **1390** wrong matches for the proposed implementation - **12921** exact matches and **1436** wrong matches for the current implementation `aggregation_strategy="max"` - **13037** exact matches and **1399** wrong matches for the proposed implementation - **12944** exact matches and **1453** wrong matches for the current implementation ### Implementation We only changed the implementation from Pipeline to ChunkPipeline. The underlying tests for this pipeline remain the same, as the functions themselves are unchanged from the previous implementation. Each chunk is processed individually. Entities are aggregated in a new function called `aggregate_entities`.<|||||>I haven't forgotten this PR, it seems to have some external interest. I wanted to dedicate some good time for a proper review and didn't have a lot. I'm looking at it tomorrow. Thanks for your work ! <|||||>And here are more complete logs so you can inspect a bit more the edge cases if you want: wikiann all languages x top 5 token classification models Overall it seems good enough to add to me.
All is good on my side <|||||>Oops: https://gist.github.com/Narsil/e8609805e8e52c7e4114586eede8a481<|||||>> Oops: https://gist.github.com/Narsil/e8609805e8e52c7e4114586eede8a481 Good results!<|||||>@Narsil, I updated the comments in the Files Changed section that give a better explanation of the `aggregate_overlapping_entities()` method. I don't know if you received a notification for that. Is it good for you?<|||||>I didn't but we're still missing some tests though. I offered to help writing them if you want, but we really want tests to show case this feature (and make sure it doesn't break later). The most important will be small tests (with tiny models) so that they run always. And showcase what's happening on simple strings while setting `model_max_length` to something super tiny to force the chunking to happen.<|||||>I thought it was for @sgugger since you already created a script that shows it works. @Narsil your help is welcome. I have in mind to create specific tests with manually checked references to ensure it doesn't break later.<|||||>No I pinged him because the code was clean enough to be looked at, but the tests (especially for such a complex feature) are mandatory. If you can get started that would be immensely appreciated, but please share if you're having a hard time or don't have the bandwidth to do it. It shouldn't take me that much time but it's always better if you can write the tests yourself. <|||||>Great, I'll do it<|||||>@Narsil, I finished the tests. The selected model is `elastic/distilbert-base-uncased-finetuned-conll03-english` (pytorch_model.bin of size 266MB). It is not a tiny model as you recommend. I tried different tiny models but unfortunately, they don't perform well on our example. In fact, it is not noticeable when running since it requires between 2s and 3s to run all the new tests. The tests are composed in two parts: 1. `test_chunking()`: Test the pipeline with all aggregation strategies and match an hard coding output 2. `test_regular_chunk()`: Test the pipeline with all aggregation strategies and match output (without scores) between regular output (without chunking+striding) and chunked output (with chunking+striding) The second test, `test_regular_chunk()` can be optional since the first, `test_chunking()` was created with the same rule: regular output match chunked output. But, even if the regular output should not be interpreted as a reference, in this case, it is good to show that we didn't create a flawed test. The selected parameters for these tests are: - `model_max_length=10`: extremely tiny to increase the difficulty - `stride=5`: quite large (regarding `model_max_length`) to generate multiple chunks (15 chunks in our example) You can also find below the output without `aggregate_overlapping_entities()` (but sorted by `"start"` for more readability) to see how the tests cover the different overlapping cases with the text: `"Hugging Face, Inc. is a French company that develops tools for building applications using machine learning. 
The company, based in New York City was founded in 2016 by French entrepreneurs Clément Delangue, Julien Chaumond, and Thomas Wolf."` ![image](https://user-images.githubusercontent.com/74506016/226181016-ef06357d-e143-4b0c-bab8-6697fb1a2b3f.png) And below, the same output with `aggregate_overlapping_entities()`: ![image](https://user-images.githubusercontent.com/74506016/226180986-62b885dd-1319-443c-9259-2983b20f6208.png) Is it good for you?<|||||>Hello @Narsil, I don't know if you didn't receive the update for the tests or if you don't have time to look at it. Do not hesitate if you need additional work to be done<|||||>> @luccailliau I took the liberty of adding directly the small test I had in mind. and pushing the other 2 tests to slow tests (since they use real models). They are good tests btw ! Great! Yes for sure, it is not good to use a real model. I also look at hf-internal-testing model but didn't found hf-internal-testing/tiny-bert-for-token-classification <|||||>@Narsil, thank you very much for your help and your time on this PR, it was a pleasure! @sgugger, I updated your changes, ready to go<|||||>It looks like two of the comments have been resolved without any changes. Did you forget a commit maybe?<|||||>> It looks like two of the comments have been resolved without any changes. Did you forget a commit maybe? Oops, should be ok now<|||||>Perfect, congrats on finishing this big PR!<|||||>Thanks!<|||||>Amazing work, thanks @luccailliau ! <|||||>Great work on this PR @luccailliau, congrats!<|||||>Thanks for the great work ! <|||||>Thank you all!
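For readers who want to try the chunked token-classification behaviour discussed in this thread, here is a minimal usage sketch. It assumes a transformers release that includes this PR and uses `dslim/bert-base-NER` purely as an example checkpoint; any token-classification model with a fast tokenizer should work. Overlapping spans are resolved as described above: the longest entity wins, and ties go to the higher score.

```python
from transformers import pipeline

# Example checkpoint only; any token-classification model with a fast tokenizer works.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # chunking requires a strategy other than "none"
    stride=64,                      # number of overlapping tokens between chunks
)

long_text = "Hugging Face, Inc. is a French company based in New York City. " * 300
entities = ner(long_text)  # inputs longer than model_max_length are chunked, not truncated
print(entities[:3])
```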
transformers
21,770
closed
Support LoRA for clip text encoder in diffusers
# What does this PR do? Support a feature in https://github.com/huggingface/diffusers/issues/2469. For now, as stable diffusion uses CLIPTextEncoder, it doesn't support adding LoRA layers yet. What we have done is quite similar to [UNet2DConditionModel](https://github.com/huggingface/diffusers/blob/e5810e686ea4ac499e325c2961808c8972dee039/src/diffusers/models/unet_2d_condition.py#L53). # What to expect after this PR? ``` import torch from transformers import CLIPTextModel, CLIPTokenizer from diffusers.models.cross_attention import LoRACrossAttnProcessor tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder") text_encoder.requires_grad_(False) # add LoRA layers lora_attn_procs = {} for name in text_encoder.attn_processors.keys(): cross_attention_dim = None if name.endswith("self_attn.processor") else text_encoder.config.hidden_size hidden_size = text_encoder.config.hidden_size lora_attn_procs[name] = LoRACrossAttnProcessor( hidden_size=hidden_size, cross_attention_dim=cross_attention_dim ) text_encoder.set_attn_processor(lora_attn_procs) inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") outputs = text_encoder(**inputs) # only added LoRA weights require gradients for name, param in text_encoder.named_parameters(): print(name, param.requires_grad) ```
02-23-2023 20:31:40
02-23-2023 20:31:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>The support for LoRA should be done using our new [peft](https://github.com/huggingface/peft) library. We won't change Transformers models directly. cc @pacman100 @patrickvonplaten <|||||>Sure, that makes sense to me, good to know. I will make a new PR directly in diffusers.
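Since the maintainers point to peft for this use case, here is a rough sketch of that route rather than of this PR's code. It assumes the peft library is installed; the targeted module names follow CLIP's attention projections (`q_proj`, `v_proj`).

```python
from transformers import CLIPTextModel
from peft import LoraConfig, get_peft_model  # assumes `pip install peft`

text_encoder = CLIPTextModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="text_encoder"
)

# Inject LoRA adapters into the query/value projections of every attention block.
lora_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"])
text_encoder = get_peft_model(text_encoder, lora_config)

# Only the LoRA weights are trainable; the base CLIP weights stay frozen.
text_encoder.print_trainable_parameters()
```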
transformers
21,769
closed
[deepspeed tests] fix issues introduced by #21700
https://github.com/huggingface/transformers/pull/21700 changed the default logging level - which broke multiple deepspeed tests. Applying a hack to restore the log-level to how it was before.
02-23-2023 20:15:10
02-23-2023 20:15:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21769). All of your documentation changes will be reflected on that endpoint.
transformers
21,768
closed
Make schedulers picklable by making lr_lambda fns global
# What does this PR do? Make schedulers picklable by making lr_lambda fns global, at the cost of the extra step of arg passing and using `partial`. Closes #21689 Implements the change mentioned in the issue across the following functions in optimizations.py: `get_constant_schedule` `get_constant_schedule_with_warmup` `get_linear_schedule_with_warmup` `get_cosine_schedule_with_warmup` `get_cosine_with_hard_restarts_schedule_with_warmup` `get_inverse_sqrt_schedule` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [link](https://github.com/huggingface/transformers/issues/21689) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-23-2023 19:13:53
02-23-2023 19:13:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>It may be a good idea to test this feature: In file `tests/optimization/test_optimization.py`: Add class ```python class LambdaScheduleWrapper: """See https://github.com/huggingface/transformers/issues/21689""" def __init__(self, fn): self.fn = fn def __call__(self, *args, **kwargs): return self.fn(*args, **kwargs) @classmethod def wrap_scheduler(cls, scheduler: LambdaLR): scheduler.lr_lambdas = list(map(cls, scheduler.lr_lambdas)) ``` And wrap the schedulers before testing the reload process in `ScheduleInitTest.test_schedulers`: ```python ... scheduler = scheduler_func(self.optimizer, **kwargs) ++ LambdaScheduleWrapper.wrap_scheduler(scheduler) # wrap to test picklability of the schedule lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps) self.assertListEqual(lrs_1, lrs_2, msg=f"failed for {scheduler_func} in save and reload") ``` <|||||>> It may be a good idea to test this feature: Thank you for filing the issue and sharing this test! I'll leave the decision of whether we should include this test to the maintainers. It may come down to tests being explicitly for functionality as opposed to something like picklability.<|||||>Yes it would be nice to have a test such as the one above.<|||||>Thank you for the comments and sorry for the delay<|||||>Thanks again for your contribution!
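The pattern adopted here is easy to show in isolation: a module-level lr_lambda bound with `functools.partial` pickles, whereas a closure defined inside the factory function does not. The sketch below illustrates the idea; it is not the exact transformers code.

```python
import pickle
from functools import partial

import torch
from torch.optim.lr_scheduler import LambdaLR


def _warmup_lr_lambda(current_step, *, num_warmup_steps):
    # Module-level function: a partial over it can be pickled.
    return min(1.0, float(current_step) / float(max(1, num_warmup_steps)))


def get_constant_schedule_with_warmup(optimizer, num_warmup_steps, last_epoch=-1):
    lr_lambda = partial(_warmup_lr_lambda, num_warmup_steps=num_warmup_steps)
    return LambdaLR(optimizer, lr_lambda, last_epoch)


optimizer = torch.optim.SGD([torch.nn.Parameter(torch.zeros(1))], lr=1e-3)
scheduler = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=10)
pickle.dumps(scheduler)  # would fail with "Can't pickle local object" if lr_lambda were a closure
```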
transformers
21,767
closed
Fix-ci-whisper
# What does this PR do? Fixes the failing test. It is related to a modification of the configuration on the Hub. https://github.com/huggingface/transformers/pull/21307/files#diff-cf6c12f8da48db4d91bcc6db32ecb7c1609a76e30719b5d47cccf595d326d235 already fixed this before; now we just enforce the transcription task.
02-23-2023 18:10:21
02-23-2023 18:10:21
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ArthurZucker I am running on a GCP VM (which is quite close to the CI runner env.), but I still get the same error. Also this CI failure is shown up on Feb. 22nd, and at that time, I don't see any change on the Hub reop `openai/whisper-large`. The latest change is only 15-17 ago. Could you double check the cause and the fix, thank you. Or let me know if I miss something.<|||||><img width="1615" alt="image" src="https://user-images.githubusercontent.com/48595927/221133297-dd977d13-79f3-48d7-a121-3c25220730fb.png"> The tokenizers where properly modified<|||||>I ran the tests locally and they passed so not really sure what is happening. Also let's add the TF_call fix for doc daily. It is probably related to a TF PR <|||||>I got the 3rd id is different in `generated_ids` and `EXPECTED_LOGITS` when running this PR. I will check on the acutal CI runner. ```bash (Pdb) generated_ids tensor([[50258, 50259, 50359, 50363, 2221, 13, 2326, 388, 391, 307, 264, 50244, 295, 264, 2808, 5359, 293, 321, 366, 5404], [50258, 50259, 50359, 50363, 6966, 307, 2221, 13, 2326, 388, 391, 311, 9060, 1570, 1880, 813, 702, 1871, 13, 50257], [50258, 50259, 50359, 50363, 634, 5112, 505, 300, 412, 341, 42729, 3196, 295, 264, 1064, 11, 365, 5272, 293, 12904], [50258, 50259, 50359, 50363, 634, 575, 12525, 22618, 1968, 6144, 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439]]) (Pdb) EXPECTED_LOGITS tensor([[50258, 50259, 50358, 50363, 2221, 13, 2326, 388, 391, 307, 264, 50244, 295, 264, 2808, 5359, 293, 321, 366, 5404], [50258, 50259, 50358, 50363, 6966, 307, 2221, 13, 2326, 388, 391, 311, 9060, 1570, 1880, 813, 702, 1871, 13, 50257], [50258, 50259, 50358, 50363, 634, 5112, 505, 300, 412, 341, 42729, 3196, 295, 264, 1064, 11, 365, 5272, 293, 12904], [50258, 50259, 50358, 50363, 634, 575, 12525, 22618, 1968, 6144, 35617, 20084, 1756, 311, 589, 307, 534, 10281, 934, 439]]) (Pdb) ```<|||||>Okay, checking! I was wrong, the task should be `translate` and not `transcribe`. But the default in the generation config is `translate`. So I am not sure I understand, this should not be failing. The docstest is failing because of a recent change in the default arguments handled in the TF generation function see #21580 and #21525<|||||>doc test and related slow test both pass locally cc @ydshieh <|||||>> But the default in the generation config is translate. So I am not sure I understand, this should not be failing. when I run the test and check the generation config in modeling forward, generation config has no task attribute, and `task` argument passed is also `None`. I think it explains things
transformers
21,766
closed
Add Mega: Moving Average Equipped Gated Attention
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #19982 This pull request adds [Mega: Moving Average Equipped Gated Attention](https://arxiv.org/abs/2209.10655), which is the current leader of the [LRA benchmark](https://paperswithcode.com/sota/long-range-modeling-on-lra). Adapted from the original [fairseq-based repo](https://github.com/facebookresearch/mega) and used a MLM checkpoint I created using the original implementation on the wikitext-103 dataset. There is no proposed Mega tokenizer, so I used the RoBERTa tokenizer which I used on the wikitext checkpoint. The proposed implementation works in encoder and decoder settings, and all relevant tests are passing. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker and @younesbelkada for text models; tagging @NielsRogge for visibility as he responded to the original issue. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-23-2023 18:04:04
02-23-2023 18:04:04
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry for the initial test failures! It should be taken care of now. Also I wanted to point out that I did not have access to a GPU while developing this, so I was not able to test on a GPU<|||||>Hi @mnaylor5 Thanks for your great work on this! Let us know when do you think this is ready for review 💪 <|||||>Thank you @younesbelkada! It is ready for review now 😄 <|||||>Hi @younesbelkada / @ArthurZucker - just checking in to see if there is anything you need from me before reviewing this pull request. Looking forward to being able to use Mega in `transformers`!<|||||>Hey! I'll give you a review tomorrow! Sorry for the wait, had to synch with @younesbelkada on this one<|||||>Thanks @ArthurZucker, and no worries! 😄 <|||||>Thanks for the review @ArthurZucker! I'll reply to individual comments where I can clear things up, and I'll accept your suggestions wherever I can. I'll probably be able to start on the modifications later today, and if not, then early next week.<|||||>Alright @ArthurZucker this should be good to review again! The biggest updates in this version are removing the `reset_parameters` methods in favor of `_init_weights`, renaming variables/comments to avoid single-letter names, docstring format updates, and renaming `Mega` to `MEGA` based on your suggestion. I have resolved the comments where I made the changes, and left the other comments in place for continued discussion. Thanks again for your feedback, and I'm happy to answer any questions that arise. Looking forward to getting MEGA into the library! 🚀 <|||||>Hi there @ArthurZucker - thanks again for the feedback in your previous review. Just reaching out to see if anything else is needed before reviewing and hopefully merging!<|||||>Hey! Sorry I must have missed your previous ping! Will review now!<|||||>Thanks @ArthurZucker! I appreciate the quick review and the encouragement 😄 I added a couple of questions where things weren't totally clear to me, but I can get started on everything else now. I'm really excited about getting this model into the library, and hopefully there won't be too many more changes required!<|||||>Will answer to your questions tomorrow! <|||||>Alright @ArthurZucker, I think that's everything except the threads with ongoing discussion. I'm super happy with how this is shaping up! In the latest batch of commits: * Renamed classes, variables, and params based on comments (mainly in EMA and MovingAverageGatedAttention class) * Rearranged positional bias, normalization functions, activation functions, dropout classes * Added the `copied from comments` where requested * Added token type ID buffer * Added tests for generation and sequence classification * Moved FFT convolution into a reusable method with additional documentation * Addressed merge conflicts from LLaMA 🦙 Thanks for the feedback and I'll wait on any more changes until you get a chance to review the updates and resolve the open discussions. Excited to get up and running with MEGA in `transformers` 🚀 🤗 <|||||>@ArthurZucker as an update, it looks like the fix for left-padding is going to be a more significant effort to implement -- the relative bias is applied in the attention function, and it expects all of the inputs to be left-to-right starting at position 0. 
We can probably refactor to accept the position IDs like they did for CodeGen, but we'll also need to change how the bias is added since it is currently using a single `(seq_len, seq_len)` tensor for the entire batch. Refactoring that might be the heavier lift, but I'm still exploring. I'll dig more into this tomorrow, but for the meantime, I've pushed updates that address the rest of your comments! If you have any other suggestions on the fix for relative positions, I'd love to hear them! 😄 <|||||>Sure! Also it's not that important to have left padding in this PR, can be added in another PR! <|||||>Thanks @ArthurZucker! After digging into it, I do think it will require a pretty significant refactor to support left-padding in this PR. If you're comfortable with it, I agree that it could make sense in a new PR. I just added an entry in the `MegaBlock` docstring for the new `causal_mask` coming from the pretrained model's method, and added a missing `device` for the token type IDs. Also pulled latest changes from `main` to hopefully prevent whatever was causing the tests for exotic models to fail. I'm really happy with how this is looking, so let me know if there's anything else needed to move forward with this PR! Appreciate your comments and guidance on everything so far! :rocket:<|||||>Awesome, it's alright with me to leave this to another PR. Will do my final review before pinging @sgugger for another pair of eyes! <|||||>Thanks again @ArthurZucker and @sgugger! Appreciate the feedback, and it should all be addressed in the latest changes 🤗 <|||||>Great working with you @mnaylor5 ! Congrats again on the merge 🔥 <|||||>Congrats @mnaylor5 ! Feel free to share on social media and we'll amplify your post<|||||>Thanks so much @ArthurZucker and @NielsRogge! I learned a ton through this process, and it's so rewarding to see my code in a library I use so much :heart: I posted something here on LinkedIn a couple days ago - I'll tag you guys in the comments as well! https://www.linkedin.com/posts/mitchnaylor_mega-activity-7045103140890660864-9VOU
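One of the pieces mentioned above, the FFT-based long convolution that MEGA's moving-average (EMA) component relies on, is simple to illustrate on its own. This is a schematic sketch of the technique, not the code added in the PR:

```python
import torch


def fft_convolve(x, kernel):
    # Long 1-D convolution computed in the frequency domain:
    # pad to 2 * seq_len, multiply the spectra, then truncate back to seq_len.
    seq_len = x.shape[-1]
    fft_len = 2 * seq_len
    x_f = torch.fft.rfft(x.float(), n=fft_len)
    k_f = torch.fft.rfft(kernel.float(), n=fft_len)
    out = torch.fft.irfft(x_f * k_f, n=fft_len)[..., :seq_len]
    return out.type_as(x)


x = torch.randn(2, 16, 128)    # (batch, channels, sequence length)
kernel = torch.randn(16, 128)  # one kernel per channel, same length as the sequence
print(fft_convolve(x, kernel).shape)  # torch.Size([2, 16, 128])
```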
transformers
21,765
closed
[Flax] Fix erroneous kwargs being passed to generate config
# What does this PR do? Setting the `dtype` with Flax `.from_pretrained` is throwing a `TypeError`: ```python from transformers import FlaxAutoModelForSpeechSeq2Seq import jax.numpy as jnp model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny.en", dtype=getattr(jnp, "float32")) ``` <details> <summary> Traceback </summary> ```python File "<string>", line 1, in <module> File "/Users/sanchitgandhi/transformers/src/transformers/models/auto/auto_factory.py", line 471, in from_pretrained return model_class.from_pretrained( File "/Users/sanchitgandhi/transformers/src/transformers/modeling_flax_utils.py", line 955, in from_pretrained model.generation_config = GenerationConfig.from_pretrained( File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 539, in from_pretrained return cls.from_dict(config_dict, **kwargs) File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 575, in from_dict logger.info(f"Generate config {config}") File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 313, in __repr__ return f"{self.__class__.__name__} {self.to_json_string()}" File "/Users/sanchitgandhi/transformers/src/transformers/generation/configuration_utils.py", line 649, in to_json_string return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/__init__.py", line 234, in dumps return cls( File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 201, in encode chunks = list(chunks) File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 438, in _iterencode o = _default(o) File "/opt/homebrew/Cellar/[email protected]/3.9.13_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type _ScalarMeta is not JSON serializable ``` </details> It looks like the dtype arg is erroneously being forwarded to the generation config via `**kwargs`. This line appears to be the culprit, where we point `kwargs` to `model_kwargs`: https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L656 And then append `dtype` to `model_kwargs`: https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L661-L662 The `dtype` then gets silently forwarded to generate config via `**kwargs`: https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/modeling_flax_utils.py#L967 This PR simply sets `model_kwargs` as a copy of `kwargs` to avoid this.
02-23-2023 16:34:09
02-23-2023 16:34:09
_The documentation is not available anymore as the PR was closed or merged._
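The root cause described above is plain dict aliasing, which is easy to reproduce outside of transformers. The toy functions below are illustrative only and are not the actual `modeling_flax_utils` code:

```python
def buggy_loader(**kwargs):
    model_kwargs = kwargs              # both names refer to the same dict
    model_kwargs["dtype"] = "float32"  # mutation is visible through `kwargs` too
    return kwargs                      # "dtype" leaks into what the generation config sees


def fixed_loader(**kwargs):
    model_kwargs = kwargs.copy()       # the fix: copy before adding model-only entries
    model_kwargs["dtype"] = "float32"
    return kwargs


assert "dtype" in buggy_loader(revision="main")
assert "dtype" not in fixed_loader(revision="main")
```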
transformers
21,764
open
[Flax Examples] Seq2Seq ASR Fine-Tuning Script
# What does this PR do? Can be used to fine-tune Flax Whisper for speech recognition. Tested and verified as working with the following (dummy) config: ``` run_flax_speech_recognition_seq2seq.py \ --model_name_or_path openai/whisper-tiny.en \ --dataset_name hf-internal-testing/librispeech_asr_dummy \ --dataset_config clean \ --train_split_name validation \ --eval_split_name validation \ --output_dir whisper-tiny-ft-dummy \ --overwrite_output_dir \ --num_train_epochs=2 \ --max_train_samples 10 \ --max_eval_samples 10 \ --warmup_steps=8 \ --do_train \ --do_eval \ --learning_rate=2e-4 \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=1 \ --predict_with_generate ``` Will add a README with preliminary training configs / results later this week after doing a full fine-tuning run. cc @peregilk @andyehrenberg for interest
02-23-2023 16:24:26
02-23-2023 16:24:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21764). All of your documentation changes will be reflected on that endpoint.<|||||>@sanchit-gandhi @andyehrenberg We have made a version of this script that supports streaming and training on TPU pods. The current version of the script is available here: [https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py](https://github.com/NbAiLab/nb-whisper/blob/main/run_flax_speech_recognition_seq2seq_streaming.py) We are however struggling with a bug at the moment. The script seems to work for training the Tiny models on multiple pod sizes, both for scaling for speed and for increasing the batch size. All the other model sizes (small, base, medium, large) also work on a single TPU v4-8. However, training on the non-Tiny model sizes runs for a few steps and then freezes. If anyone has any idea about why this could be happening, I would really appreciate it.
transformers
21,763
closed
Ability to specify certificate for running language training modules (e.g. run_mlm.py)
### Feature request Add a command-line param to the training modules, e.g. run_mlm.py, to take in a custom certificate file for the requests module. ### Motivation When running behind a VPN, the training modules, e.g. the run_mlm.py script, give SSL errors when trying to connect to URLs, e.g. to download a pre-trained model. It gives an error like this: urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /roberta-base/resolve/main/config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)'))) This seems to happen when the code runs on a device that is behind a VPN, because the VPN connection uses a custom root cert. ### Your contribution I was able to resolve the issue by specifying a certificate explicitly in requests.get() in _http.py, as follows: ca = "company_root_ca_exported_from_my_browser" and then modified the following line: response = requests.request(method=method, url=url, **kwargs) to this: response = requests.request(method=method, url=url, verify=ca, **kwargs)
02-23-2023 15:30:58
02-23-2023 15:30:58
Examples are just examples. They can't contain 1,000 arguments to support every user's need without becoming unreadable. In this instance, you should just change the example to suit your need.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
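As the maintainer suggests, this can usually be handled on the user side without patching the library. A sketch of two common options follows; the certificate path is a placeholder, and `REQUESTS_CA_BUNDLE` is the standard environment variable that `requests` (which the Hub download code goes through) honours.

```python
import os

import requests

CA_BUNDLE = "/path/to/company_root_ca.pem"  # placeholder: exported corporate root CA

# Option 1: point every requests-based call (including Hub downloads) at the custom CA.
os.environ["REQUESTS_CA_BUNDLE"] = CA_BUNDLE

# Option 2: pass `verify` explicitly wherever you control the call yourself.
response = requests.get(
    "https://huggingface.co/roberta-base/resolve/main/config.json",
    verify=CA_BUNDLE,
)
print(response.status_code)
```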
transformers
21,762
closed
[time series] updated expected values for integration test.
# What does this PR do? Updated the integration test expected values. cc @ydshieh
02-23-2023 13:14:56
02-23-2023 13:14:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,761
closed
AttributeError: type object has no attribute 'forward'
### System Info - `transformers` version: 4.21.1 - Platform: Linux-5.4.0 - Python version: 3.8.8 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker and @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction ```py from transformers import BertConfig, TFBertForSequenceClassification, AutoTokenizer, BertConfig import tensorflow as tf class CustomTFModel(TFBertForSequenceClassification): def __init__(self, config: BertConfig, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') conf = BertConfig.from_pretrained('bert-base-uncased') model = CustomTFModel(conf) #model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased') labels = tf.convert_to_tensor([0,1,2]) ds = tf.data.Dataset.from_tensor_slices({'input_ids':model.dummy_inputs['input_ids'],'labels': labels}).batch(1) model.compile( tf.keras.optimizers.Adam(learning_rate=3e-5), metrics='accuracy' ) model.fit( ds, epochs=1, shuffle=True, verbose=1, ) ``` ### Expected behavior when I execute this code I get the error: ```bash File "issue.py", line 21, in <module> model.fit( File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py", line 1147, in autograph_handler raise e.ag_error_metadata.to_exception(e) AttributeError: in user code: File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1021, in train_function * return step_function(self, iterator) File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1010, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/keras/engine/training.py", line 1000, in run_step ** outputs = model.train_step(data) File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1353, in train_step label_kwargs = find_labels(self.__class__) File "/home/ubuntu/anaconda3/envs/tf2_mattia/lib/python3.8/site-packages/transformers/utils/generic.py", line 309, in find_labels signature = inspect.signature(model_class.forward) AttributeError: type object 'CustomTFModel' has no attribute 'forward' ``` I would expect that the `type object 'CustomTFModel' has no attribute 'forward'` error would not be generated as the `CustomTFModel` class should be identical to `TFBertForSequenceClassification` or am I wrong? if you use `TFBertForSequenceClassification` instead (the commented line) the error is not triggered.
02-23-2023 13:12:32
02-23-2023 13:12:32
Hey! Thanks for posting. You need to re-define the `forward` method. The following script works as expected ```python from transformers import BertConfig, TFBertForSequenceClassification, AutoTokenizer, BertConfig import tensorflow as tf class CustomTFModel(TFBertForSequenceClassification): def __init__(self, config: BertConfig, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) def forward(self, **kwargs): super().call(**kwargs) tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') conf = BertConfig.from_pretrained('bert-base-uncased') model = CustomTFModel(conf) #model = TFBertForSequenceClassification.from_pretrained('bert-base-uncased') labels = tf.convert_to_tensor([0,1,2]) ds = tf.data.Dataset.from_tensor_slices({'input_ids':model.dummy_inputs['input_ids'],'labels': labels}).batch(1) model.compile( tf.keras.optimizers.Adam(learning_rate=3e-5), metrics='accuracy' ) model.fit( ds, epochs=1, shuffle=True, verbose=1, ) ``` (that is a quick fix; for TF it should be checking for a `call` method). We check whether the model name starts with `TF`. So `TFCustomModel` will also work without redefining the function. ```python class TFCustomModel(TFBertForSequenceClassification): def __init__(self, config: BertConfig, *inputs, **kwargs): super().__init__(config, *inputs, **kwargs) ``` This probably was not clear, so we might need to update the doc @gante <|||||>Hey! Thank you very much for your quick reply. Now it is working correctly. I was using the quick fix you proposed too, but I noticed that during training the metrics that are passed inside `model.compile` are not evaluated. I don't know if this is a problem on your side or if it is due to tensorflow; I tried to find out what the problem is but I couldn't understand it.<|||||>Hey all! Technically, this problem __can__ be solved without documentation (which is tricky to sort for this particular problem). If we replace the class name check with an inheritance check (e.g. [here](https://github.com/huggingface/transformers/blob/0ffa22f9f6662ec9a0b6b6225bf152d32ab3e151/src/transformers/utils/generic.py#L392)), any name can be given to downstream classes. In terms of code, `model_name.startswith("TF")` would become `"transformers.modeling_tf_utils.TFPreTrainedModel" in str(inspect.getmro(model_class))`. However, to ensure correctness, we would need to trigger a series of changes, as several places in our codebase rely on class names prefixed with the framework. Since this is my first time seeing this problem, I'll err toward no change. Nevertheless, I wanted to leave this comment here for future reference :) cc @sgugger <|||||>I think it would be great to use a class check instead, maybe checking if we inherit from a keras model or an nn module?
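For reference, a minimal sketch of the inheritance-based check described in the comment above (not the shipped implementation; the only assumption is that TF model classes derive from `TFPreTrainedModel`):

```python
import inspect


def is_tf_model_class(model_class) -> bool:
    # True for any subclass of transformers' TFPreTrainedModel, regardless of how the subclass is named
    return "transformers.modeling_tf_utils.TFPreTrainedModel" in str(inspect.getmro(model_class))
```

With such a check, a utility like `find_labels` could inspect the `call` signature for any TF subclass regardless of its name, e.g. `is_tf_model_class(CustomTFModel)` would return `True`.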
transformers
21,760
closed
ImportError: Blip2ForConditionalGeneration
### System Info I am trying to run image captioning using `BLIP2` following the steps mentioned in [Link](https://huggingface.co/blog/blip-2#using-blip-2-with-hugging-face-transformers). However, there seems to be an import error for `BLIP2ForConditionalGeneration`. `transformers version: 4.27.0.dev0` Issue ```bash Traceback (most recent call last): File "/home/aayush/ControlNet/blip2_captions_test.py", line 4, in <module> from transformers import AutoProcessor, BLIP2ForConditionalGeneration ImportError: cannot import name 'Blip2ForConditionalGeneration' from 'transformers' (/home/aayush/miniconda3/envs/diffusers/lib/python3.10/site-packages/transformers/__init__.py) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. pip install transformers 2. Run the following snippet ``` import torch from PIL import Image from transformers import AutoProcessor, Blip2ForConditionalGeneration device = "cuda" if torch.cuda.is_available() else "cpu" processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) ``` ### Expected behavior It should have started downloading the BLIP-2 models locally
02-23-2023 11:58:14
02-23-2023 11:58:14
Hi, thanks for using BLIP-2! It's Blip2ForConditionalGeneration, not BLIP2ForConditionalGeneration ;)<|||||>@NielsRogge Same issue changing to `Blip2ForConditionalGeneration` ```bash File "/home/aayush/ControlNet/blip2_captions_test.py", line 4, in <module> from transformers import AutoProcessor, Blip2ForConditionalGeneration ImportError: cannot import name 'Blip2ForConditionalGeneration' from 'transformers' (/home/aayush/miniconda3/envs/diffusers/lib/python3.10/site-packages/transformers/__init__.py)) ```<|||||>@NielsRogge I installed `transformers` from source now ```bash git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` instead of `pip install git+https://github.com/huggingface/transformers` or `pip install transformers`. It works now! <|||||>Ok, I thought you were already using that since you mentioned transformers version: 4.27.0.dev0. Feel free to close this issue if it's resolved :)<|||||>I installed from the source but still getting the same error. Any update here? <|||||>@minarainbow How did you installed from source? This way (worked!) ``` git clone https://github.com/huggingface/transformers.git cd transformers pip install -e . ``` or (error) ``` pip install git+https://github.com/huggingface/transformers `` <|||||>Best is to do: ``` pip install git+https://github.com/huggingface/transformers.git@main ```
transformers
21,759
closed
Generate - update cookie cutters to not initialize cache with training and gradient checkpointing
# What does this PR do? Add the changes in #21733 to the cookie-cutter files. The other modeling files are being tracked in the following "Good First Issue": #21737
02-23-2023 11:37:28
02-23-2023 11:37:28
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,758
closed
[WIP] pass kwargs to config
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/21757 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger @Narsil PretrainedConfig related
02-23-2023 10:50:30
02-23-2023 10:50:30
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21758). All of your documentation changes will be reflected on that endpoint.<|||||>👍 Anyway, wasn't expecting changing something as fundamental as `.from_pretrained` to be reasonable or easy 😅 @sgugger While working on this I realized a couple of things. Will make separate PRs for them if need be: - pruned_heads key values should be checked to be of type int before casting, plus an error message. There was also a test that used "a" as a pruned_head but wasn't failing; will look into why later. - Some models' configs use `initializer_range`, some use `init_std`. For example, `FlaubertConfig` doesn't have `initializer_range`, but tests in FlaubertModelTester pass `initializer_range` and not `init_std`... These keys don't seem to be defined in the `attribute_map` either. So we should probably look into those. Having fun figuring out how the `from_pretrained` magic works<|||||>The pruned head fix is a welcome one. As I've said before (and as you can see from all the failing tests), you cannot change the logic inside the pretrained config like this without breaking many things in the library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,757
closed
Custom kwargs not passed when extending PretrainedConfig class
### System Info - `transformers` version: 4.26.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.9.12 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.0.dev20220925 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger @Narsil ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I want to load an existing model's config with `.from_pretrained` but also want to pass my own kwargs `moo` & `boo`. I'm extending `PretrainedConfig` like below: ```python from transformers import PretrainedConfig class MyClassificationConfig(PretrainedConfig): def __init__(self, moo='poo', boo=5, **kwargs): print(boo) # prints 5 because boo doesn't get passed print(kwargs) # do custom calculations and set some custom config values here super().__init__(**kwargs) MyClassificationConfig.from_pretrained('google/canine-s',# example model, any other is same id2label={1:"g",2:"b"}, moo="poo", boo="hoo") ``` Only the predefined `id2label,label2id,num_classes` values get updated in the config. Happens [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L702). The custom `moo` and `boo` params don't get passed to `MyClassificationConfig`, because kwargs don't get passed [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L696). This results in the `moo` & `boo` argument values not changing from the default. Since kwargs are optional, this can result in silent errors where you are actually using default values while thinking you are passing values! I think this is a bug. But if it is intentional, it would be nice to warn the user so there are no silent errors. ### Expected behavior Should be able to extend the PretrainedConfig class allowing custom kwargs.
02-23-2023 10:47:04
02-23-2023 10:47:04
This is intended. Adding the kwargs is done [here](https://github.com/huggingface/transformers/blob/78a93d17c0e0bca0bc4477e0ee362a95d79f9b22/src/transformers/configuration_utils.py#L712) but to filter out the values that have nothing to do in the config, we detect whether they are attributes of the config or not. So you should change your code like this: ```py class MyClassificationConfig(PretrainedConfig): def __init__(self, moo='poo', boo=5, **kwargs): print(boo) # prints 5 because boo doesn't get passed print(kwargs) # do custom calculations and set some custom config values here self.moo = moo self.boo = boo super().__init__(**kwargs) ```<|||||>this is saving the custom `moo`, `boo` to the returned config, `config = MyClassificationConfig.from_pretrained("hf-internal-testing/config-no-model",boo="hoo")` but I still can't access the kwarg value I set inside the `__init__` to do some calculations. ~~Is there a post config init function I can override?~~ I guess I can do it at call.<|||||>Also I'm a bit confused about the following behaviour (example): Even though the DinatConfig class doesn't take `image_size` as a param and thus doesn't do `self.image_size=image_size` anywhere, if I pass `config=DinatConfig(...,image_size=128,...)` this (imagined) image_size param becomes part of the config (because super() assigns them): `DinatConfig(image_size=5).image_size`. This can be good if you want to save some extra stuff along in your config, but isn't it also prone to confusion, say if I confuse config key names? Also, if I can just pass whatever kwarg and it becomes part of the config, then do I really need to extend the PretrainedConfig class if I'm just assigning passed kwargs to self 🤔 I guess just to assign default values.<|||||>Sorry, you actually don't need to set them as attributes, you just need to pass them along to the super method: ``` class MyClassificationConfig(PretrainedConfig): def __init__(self, moo='poo', boo=5, **kwargs): print(boo) # prints 5 because boo doesn't get passed print(kwargs) # do custom calculations and set some custom config values here super().__init__(moo=moo, boo=boo, **kwargs) ``` will have the right value set in the config.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
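To illustrate the accepted answer above, a small round-trip sketch (it assumes the corrected `MyClassificationConfig` from the last code block is already defined; the directory is a temporary one):

```python
import tempfile

config = MyClassificationConfig(moo="zoo", boo=7)
with tempfile.TemporaryDirectory() as tmpdir:
    config.save_pretrained(tmpdir)  # moo and boo are serialized into config.json
    reloaded = MyClassificationConfig.from_pretrained(tmpdir)

print(reloaded.moo, reloaded.boo)  # expected: zoo 7
```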
transformers
21,756
closed
[Examples] Generalise run audio classification for log-mel models
# What does this PR do? Currently, `run_audio_classification.py` hard-codes the model input name to `input_values` in the pre-processing function. This makes it compatible with Wav2Vec2-style CTC models, but not other speech models that use log-mel `input_features` (e.g. Whisper or AST). We adopt the same strategy that we use in `run_speech_recognition_seq2seq.py` and set this to the correct model input name (based on the feature extractor's attribute `.model_input_names`): https://github.com/huggingface/transformers/blob/1d4b79785263077f9f09ddde5a75ae4f116e85d7/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L419
02-23-2023 10:46:22
02-23-2023 10:46:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,755
closed
Something confused me about Nystromformer
### System Info Any GPU machine with Transformers 4.26.0 ### Who can help? @ArthurZucker @younesbelkada @sgugger @novice03 ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction The parameter `segment-means-seq-len` in the Nystromformer config is set to 64, and is equal to another parameter, num_landmarks (64). [refer to code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L106) But if they are equal, the Nystromformer will perform O(n^2) attention like BERT, not the Nystrom attention proposed in the original paper: https://arxiv.org/abs/2102.03902. [refer to code](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L106) Through experimentation and analysis, I think the parameter `segment-means-seq-len` should be the length of the tokenized input sequence. It should not be set to 64; if you set it to 64, it means you are using O(n^2) attention, not Nystrom attention. So, is there a problem with the code, or is my understanding wrong? Additionally, was the model weight [w-madison/nystromformer-5](https://github.com/huggingface/transformers/blob/main/src/transformers/models/nystromformer/configuration_nystromformer.py#L24) trained with O(n^2) attention? If yes, will the model weights not work with Nystrom attention, and do they need to be pretrained with Nystrom attention? ### Expected behavior The parameter `segment-means-seq-len` is set to the real tokenized sequence length, so that Nystrom attention can be used for training or inference.
02-23-2023 10:35:47
02-23-2023 10:35:47
I'll ping @novice03 here as he's an expert on Nyströmformer <|||||>Hello @1649759610, thank you for making this post. It looks like this is indeed an issue in the code that I might have overlooked. You are correct that `segment-means-seq-len` should be set to the length of the input. If I were to fix it, I would just remove the `segment-means-seq-len` parameter and set `self.seq_len` in the model to `config.max_position_embeddings`. I think a pull request would have to be made to make these changes. I am also guessing that the tests need to be changed accordingly. However, regarding the checkpoints, they were trained with Nystrom attention and not O(n^2) attention. This is just an issue in the HuggingFace implementation. So, they need not be re-trained. <|||||>@1649759610 Would you like to make a PR about this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,754
closed
[Whisper] Add model for audio classification
# What does this PR do? Adds `WhisperForAudioClassification`: the Whisper encoder model with a sequence classification head on top for audio classification tasks. With the changes implemented in #21756, Whisper can be fine-tuned for any audio classification task. Results of fine-tuning suggest that this is an extremely promising approach for audio classification. On the FLEURS language identification task, fine-tuning Whisper medium achieves an accuracy of **88%**, beating previous SoTA by 10% and the zero-shot Whisper model by 24% absolute: <img width="614" alt="Screenshot 2023-02-28 at 09 37 27" src="https://user-images.githubusercontent.com/93869735/223370179-338e1f12-7793-48e5-9715-66d7f9da4af3.png"> See logs at [sanchit-gandhi/whisper-medium-fleurs-lang-id](https://huggingface.co/sanchit-gandhi/whisper-medium-fleurs-lang-id) for details.
02-23-2023 10:23:10
02-23-2023 10:23:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>(Failing test is unrelated)
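A hedged inference sketch for the new head, using the checkpoint linked in the PR description (the `audio` input is an assumption: any 1-D float waveform sampled at 16 kHz supplied by the caller):

```python
import torch
from transformers import AutoFeatureExtractor, WhisperForAudioClassification

ckpt = "sanchit-gandhi/whisper-medium-fleurs-lang-id"
feature_extractor = AutoFeatureExtractor.from_pretrained(ckpt)
model = WhisperForAudioClassification.from_pretrained(ckpt)

# `audio` is a 1-D numpy array (or list of floats) sampled at 16 kHz, provided by the caller
inputs = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_features).logits

print(model.config.id2label[logits.argmax(-1).item()])
```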
transformers
21,752
closed
Different behavior in DistilBERT when using "inputs_embeds"
Fixes #21089 # What does this PR do? The behavior of the DistilBERT model is wrong when the input embeddings are provided, and does not align with BERT or other models. This is due to the `Embeddings` layer, which internally computes the sum of the positional embedding and the word embedding. If the input embeddings are passed, they are not added to the positional embedding internally.
02-23-2023 08:23:31
02-23-2023 08:23:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @VictorSanh in case there was a specific reason behind this? <|||||>changes look good to me, thanks Arthur! no specific reason behind it. my vague recollection is that `input_embeds` didn't exist when i first implemented it, so i would wait for core maintainer (or whatever the process is for transformers) to validate!
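To make the aligned behaviour concrete, here is a toy sketch of an embeddings layer that adds position embeddings even when `inputs_embeds` is supplied (class name and dimensions are illustrative, not the exact DistilBERT code):

```python
import torch
import torch.nn as nn


class ToyEmbeddings(nn.Module):
    def __init__(self, vocab_size=30522, max_positions=512, dim=768):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, dim)
        self.position_embeddings = nn.Embedding(max_positions, dim)

    def forward(self, input_ids=None, inputs_embeds=None):
        if inputs_embeds is None:
            inputs_embeds = self.word_embeddings(input_ids)
        seq_length = inputs_embeds.size(1)
        position_ids = torch.arange(seq_length, device=inputs_embeds.device).unsqueeze(0)
        # the point of the fix: position embeddings are added in both code paths
        return inputs_embeds + self.position_embeddings(position_ids)
```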
transformers
21,751
closed
add BioGptForSequenceClassification
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/21530 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @NielsRogge @GuillemGSubies
02-23-2023 07:52:31
02-23-2023 07:52:31
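Once merged, usage should follow the standard sequence-classification pattern; a hedged sketch (the `microsoft/biogpt` checkpoint and `num_labels=2` are assumptions for illustration, and the freshly initialized head returns untrained logits):

```python
import torch
from transformers import AutoTokenizer, BioGptForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = BioGptForSequenceClassification.from_pretrained("microsoft/biogpt", num_labels=2)

inputs = tokenizer("Aspirin reduces the risk of myocardial infarction.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

print(logits.softmax(-1))  # probabilities from the (still untrained) classification head
```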
transformers
21,750
closed
typos in french documentation
# What does this PR do? Fix a few typos in the french documentation. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger, @stevhliu and @MKhalusova
02-23-2023 06:31:33
02-23-2023 06:31:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,749
closed
[WIP] A potential fix for `AutoConfig` and `AutoModel`
# What does this PR do? ⚠️ This is just to demonstrate the issue and a (maybe super wrong) way to fix it: - Maybe the usage of such models is very low - not worth the time - I can't find `DecisionTransformerGPT2Model` on the Hub. - `nllb` is not a real issue, as it contains only tokniezer, which `AutoTokenizer` works well. - Should we instead creating new configuration classes and associate them to some `model_type` shown below? - The current fix is kind hacky, and it also prevents obtaining the complete list in the different `XXX_MODEL_MAPPINGS`. ## Issue Some keys in `MODEL_MAPPING_NAMES` are not in `CONFIG_MAPPING_NAMES`. Here are 2 examples - `decision_transformer_gpt2` (associate to model class `DecisionTransformerGPT2Model`) - `nllb` (associate to model class `M2M100Model`) When we have a configuration with these `model_type`: - saving the configuration won't save the specified `model_type` - so loading will load the **wrong** (in some sense) `model_type` in the configuration, and therefore using the wrong model class to load the model checkpoint (in some case). The following code snippet shows the problems ### Code snippet #### This shows the model type `decision_transformer_gpt2` is not saved ```python import os import json import tempfile from transformers import DecisionTransformerConfig, AutoConfig config = DecisionTransformerConfig() # originally being `decision_transformer` print(config.model_type) config.model_type = "decision_transformer_gpt2" # become `decision_transformer_gpt2` print(config.model_type) with tempfile.TemporaryDirectory() as tmpdir: config.save_pretrained(tmpdir) # check what is saved with open(os.path.join(tmpdir, "config.json")) as fp: config_dict = json.load(fp) # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer` print(config_dict["model_type"]) auto_config = AutoConfig.from_pretrained(tmpdir) # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer` print(auto_config.model_type) assert auto_config.model_type == "decision_transformer_gpt2" ``` #### This shows `AutoModel` loads the model checkpoint with the wrong model class ``` import tempfile from transformers import DecisionTransformerConfig, AutoModel, DecisionTransformerGPT2Model config = DecisionTransformerConfig() # originally being `decision_transformer` print(config.model_type) config.model_type = "decision_transformer_gpt2" # become `decision_transformer_gpt2` print(config.model_type) # create a model with type `DecisionTransformerGPT2Model` model = DecisionTransformerGPT2Model(config) with tempfile.TemporaryDirectory() as tmpdir: model.save_pretrained(tmpdir) auto_model = AutoModel.from_pretrained(tmpdir) # this should be `"decision_transformer_gpt2"`, but we get `decision_transformer` print(auto_model.config.model_type) # this should be `"DecisionTransformerGPT2Model"`, but we get `DecisionTransformerModel` print(auto_model.__class__.__name__) assert auto_model.__class__.__name__ == "DecisionTransformerGPT2Model" ``` Enhance `AutoConfig`
02-23-2023 06:19:02
02-23-2023 06:19:02
_The documentation is not available anymore as the PR was closed or merged._<|||||>Close as this is not an issue.
transformers
21,748
closed
"Emotion English DistilRoBERTa-base" - Inference API not loading model
### System Info The Inference API does not load this specific model: "Emotion English DistilRoBERTa-base". It keeps returning a 503 error and timing out (both when I try to make a request locally through my webapp and on the huggingface website). Other models seem to be loading fine. This is very strange because things were working fine for the past month. I am still well below my request and character limits too. Occasionally it does work properly but this is very rare. <img width="515" alt="Screen Shot 2023-02-22 at 8 53 38 PM" src="https://user-images.githubusercontent.com/56925074/220805577-bb4eb002-68c0-41b9-9392-e0d43f2dbe44.png"> ### Who can help? @ArthurZucker @sgugger @Narsil ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction async function query(data) { const response = await fetch( "https://api-inference.huggingface.co/models/j-hartmann/emotion-english-distilroberta-base", { headers: { Authorization: "Bearer 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" }, method: "POST", body: JSON.stringify(data), } ); const result = await response.json(); return result; } ### Expected behavior For the model to load and return my results. I modify and log the Json result elsewhere in my code but that is not relevant to this problem so I didn't provide it.
02-23-2023 02:08:48
02-23-2023 02:08:48
cc @Narsil <|||||>We've been having issues today. Thanks for reporting. Things should be slowly getting back to normal.<|||||>Closing as everything should be back online.
transformers
21,747
closed
Slow decoding with many special tokens in vocabulary
### System Info present across multiple versions ### Who can help? @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import T5Tokenizer from time import time from random import randint t1 = T5Tokenizer.from_pretrained('t5-base') t2 = T5Tokenizer.from_pretrained('t5-base', extra_ids=2000) to_decode = [randint(0, 32000) for i in range(10000)] start = time() t1.decode(to_decode) print("few special tokens:", time() - start) start = time() t2.decode(to_decode) print("many special tokens:", time() - start) ``` ### Expected behavior The slowdown should not be so drastic. The cause is an inefficient implementation of [`all_special_ids`](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1293) and [`all_special_tokens`](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py#L1267). Additionally, generating them on the fly incurs a large overhead since this attribute is queried for every id to be decoded ([here](https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils.py#L907) and [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/t5/tokenization_t5.py#L324)).
02-22-2023 22:08:13
02-22-2023 22:08:13
Slow tokenizers are... slow. That's why we wrote the tokenizers library ;-) Why not use `T5TokenizerFast`, which doesn't have the same problem?<|||||>`T5TokenizerFast` does not have byte-fallback + why artificially handicap the slow tokenizer if it could be more efficient (using sets instead of lists and computing the attribute only when it's updated)?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
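To illustrate the set-based suggestion, a short continuation of the reproduction script above (it only times user-side membership checks; it is not the library fix itself):

```python
special_ids = set(t2.all_special_ids)  # built once, O(1) lookups afterwards

start = time()
kept = [i for i in to_decode if i not in special_ids]
print("set-based special-token filtering:", time() - start)
```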
transformers
21,746
closed
Regarding Ragend2endRetriever
Unable to change the generator to T5ForConditionalGeneration when running with `--model_name_path t5-base --model_type t5`. It fails with `AttributeError: 'T5ForConditionalGeneration' object has no attribute 'rag'`, raised at `raise AttributeError("'{}' object has no attribute '{}'".format(...)`. @shamanez
02-22-2023 21:23:48
02-22-2023 21:23:48
transformers
21,745
closed
Tokengt for Graph Classification
# What does this PR do? replaces #21098 Adds the TokenGT model for graph classification in Transformers. Done: - [x] Architecture ported - [x] Collator (the model has no tokenizer) and preprocessing Todo: - [ ] Test results against original implementation, to make sure they are within precision range. - [ ] Add checkpoints and make sure they load properly - [ ] Update doc - [ ] Update test suite - [ ] Add model card for the checkpoints once added ## Dependencies Cython, like Graphormer, and einops. Linked to #21079 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. (Discussed on Slack) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Not tagging anyone for now as this is a draft.
02-22-2023 19:49:49
02-22-2023 19:49:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21745). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Oops. Don't mark stale. Hold on ... <|||||>Hi @Raman-Kumar ! Do you need a hand on this PR? :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, @clefourrier, soon I will **must** complete (I was focusing on my new full-time job recently)<|||||>Congrats on your new job! I'll have some time off soon so I'll be less responsive, but feel free to ping me nonetheless!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>In July Month, I work on this. I have more holiday on that month
transformers
21,744
closed
Update doctest GH workflow file
# What does this PR do? Just as in other workflow files, this PR adds a step in the doctest workflow file ``` - name: Show installed libraries and their versions run: pip freeze ``` so we can quickly access this information whenever necessary, allowing faster debugging. It took me some time to find the fix shown in #21742
02-22-2023 19:06:02
02-22-2023 19:06:02
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,743
closed
[`tests`] add `accelerate` marker
# What does this PR do? This PR introduces a nice utility for users and contributors (such as myself) who want to run just the `accelerate`-specific tests. Thanks to @fxmarty, who introduced this utility to me. By using `pytest.mark`, you can run `accelerate`-specific tests easily. With this PR, if, let's say, I want to run the `accelerate` tests on OPT, I can just do: ```bash RUN_SLOW=1 pytest -m accelerate_tests tests/models/opt/test_modeling_opt.py ``` cc @ydshieh @sgugger
02-22-2023 16:17:57
02-22-2023 16:17:57
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @younesbelkada ! I understand the advantage of adding this. However, I would recommend to follow what we have like `is_pt_tf_cross_test` as in https://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/src/transformers/testing_utils.py#L150 Along this approach, you also have to check https://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/src/transformers/testing_utils.py#L142 and (I am not familiar with this part hoever) https://github.com/huggingface/transformers/blob/619d51e01f326d298687912d1f65f8d460f2a6e2/conftest.py#L34 This is a suggestion based my intuition - I never did this before in `transformers`. You can wait a bit to hear what @sgugger says. <|||||>Thanks a lot for sharing that! I agree for consistency it would make sense to do this, however this solution seems IMO much simpler, given the number of affected tests (usually for each model there are only 3 `accelerate` tests), let's see what @sgugger will say!<|||||>> Thanks a lot for sharing that! I agree for consistency it would make sense to do this, however this solution seems IMO much simpler, given the number of affected tests (usually for each model there are only 3 `accelerate` tests) My response here is not to convince @younesbelkada the suggested approach, but just want to point out the above argument doesn't seem very valid: if you look `tests/test_modeling_common.py`, there is only one test decorated with `is_pt_tf_cross_test`, and yet we still use that approach.<|||||>The difference with the PT/TF cross tests is that we want to skip them unless a special env var is set (to not run them in the torch job and tf job for instance). This is not necessary here since as we don't want to prevent those tests from running in the regular jobs, this is just for convenience.<|||||>The failing test seems to be unrelated to this PR, merging!
transformers
21,742
closed
Fix 2 quicktour file doctest
# What does this PR do? Update expected output values due to updates of a package or a Hub repo file.
02-22-2023 15:54:13
02-22-2023 15:54:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,741
closed
Add ALIGN to transformers
# What does this PR do? Adds [ALIGN](https://arxiv.org/abs/2102.05918) to transformers, a multi-modal model similar to CLIP. ALIGN uses EfficientNet as its vision encoder and BERT as its text encoder. No public implementation is available for this model, the code is adapted from Kakao Brain's tensorflow implementation shared with us. To do: - [x] Upload converted model - [x] Create a model card - [x] Fix processor tests - [x] Fix model tests ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X ] Did you write any new necessary tests?
02-22-2023 14:59:07
02-22-2023 14:59:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for adding this model! Most comments are just formatting nits. > > Two main comments to address: > > * Can you modify the model prefix to Align, rather than ALIGN? > * All of the vision encoder architecture is, as far as I can tell, exactly the same as the EfficientNet model. And the text encoder is directly imported from Bert. Is there a reason for reimplementing EfficientNet but importing Bert? Thank you! I can modify the model prefix to Align but then I will have to change README titles to Align in order to not get repo-consistency errors and I'd like to keep the model name in the documentation the same as the original work. Could we keep it as it is, similar to BERT and CLIP? cc @sgugger The Kakao Brain EfficientNet implementation is slightly different from the official one (final layers), hence I'm copy pasting the modules and changing the final layers.<|||||>> The Kakao Brain EfficientNet implementation is slightly different from the official one (final layers), hence I'm copy pasting the modules and changing the final layers. OK, I understand better now. I realised there's something I overlooked in my initial review: why are we using a randomly initialized Bert model as the text encoder? My understanding of this model is a text and vision backbone are loaded in and then trained to align their embeddings i.e. the weights we should be loading are the respective BERT and EfficientNet weights post-training and the api would be something like `AlignModel.from_pretrained(google/align-efficientnetv2-bert-large')`. Is this correct? I agree that it'd make more sense to initialize BERT and EfficientNet from pretrained checkpoints but here is the experiment setup described in the paper: > We train our ALIGN models from scratch, using the open sourced implementation of EfficientNet as the image encoder and BERT as the text encoder. The Kakao Brain implementation also trains models from scratch, should I keep it in line with the paper or use the respective repos? Both are fine with me<|||||> > I can modify the model prefix to Align but then I will have to change README titles to Align in order to not get repo-consistency errors and I'd like to keep the model name in the documentation the same as the original work. Could we keep it as it is, similar to BERT and CLIP? cc @sgugger > My mistake, I figured out how to keep the documentation title camel cased, disregard this please. <|||||>Ah, OK. Sorry for the confusion. Just to make sure I understand: Google haven't released their weights; kakao Brain have themselves trained the ALIGN model and these are the weights we're using. Is this right?<|||||>> Ah, OK. Sorry for the confusion. Just to make sure I understand: Google haven't released their weights; kakao Brain have themselves trained the ALIGN model and these are the weights we're using. Is this right? No problem at all, Kakao Brain implemented and trained the ALIGN model themselves, Google haven't released checkpoints nor the code.<|||||>I think all comments are addressed, pinging @amyeroberts and @sgugger for the final review
transformers
21,740
closed
[examples/summarization] deal with `max_length` and `num_beams`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hello 👋, 1. In the examples/pytorch/summarization/run_summarization.py, when the `generation_max_length` is not defined, the parameter `max_length` for `generate` function will be set to `val_max_target_length` in the line [675](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/examples/pytorch/summarization/run_summarization.py#L675-L679), and be used for the **final evaluation** and prediction after training. However, the `max_length` used for the **evaluation during the training** will be possibly set to `None` or the `max_length` defined in the model's config. It's not a consistent behavior. Here I tried to unify this parameter before the `Seq2SeqTrainer` is initialized. Also applied the same to `num_beams`. 2. In the examples/pytorch/summarization/run_summarization_no_trainer.py, fixed the parameter `max_length`. Think it has higher priority than `val_max_target_length`. Please ignore this PR if it's intended behavior :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc: @sgugger @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-22-2023 13:34:33
02-22-2023 13:34:33
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sgugger, just removed the `max_length`! I also added another modif which separates the preprocessing of the training and the validation set in order to truncate the outputs by `max_target_length` and `val_max_target_length` respectively
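A hedged sketch of the unification described in point 1, using the argument names from the example script (the merged code may differ in detail; `training_args` and `data_args` are the parsed arguments of `run_summarization.py`):

```python
# before building the Seq2SeqTrainer, fall back to the data args when the
# generation-specific training args were not given on the command line
if training_args.generation_max_length is None:
    training_args.generation_max_length = data_args.val_max_target_length
if training_args.generation_num_beams is None:
    training_args.generation_num_beams = data_args.num_beams
```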
transformers
21,739
closed
Pegasus Tokenizer is throwing away newline tokens
### System Info It's important for my model to learn where the newlines should be placed in the output, and from my understanding, this information is being removed by the Pegasus tokenizer (applicable to the latest version of the Transformers/Tokenizers library): For example, if my target output is ``` SECTION HEADING \n\nHere is the output for this section, cool! ``` then if I encode and decode it through the tokenizer, it becomes ``` SECTION HEADING Here is the output for this section, cool! ``` So I guess my question would be: Am I missing something, and is there some toggle I can enable that would allow the tokenizer to preserve newlines? If there is not a toggle, is there a reason that one shouldn't be added? Of course I have the option of pre-processing my text to convert newlines to `<n>` and then post-processing to turn the `<n>` back to `\n`, but that seems a little hacky for my liking 😅 ### Who can help? @ArthurZucker, @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import transformers tokenizer = transformers.AutoTokenizer.from_pretrained("google/pegasus-x-base") sample = "I am a section \n \n Now I should be a few lines below!" inputs = tokenizer.encode(sample, return_tensors="pt") out = tokenizer.decode(inputs[0]) ``` Out is ``` 'I am a section Now I should be a few lines below!</s>' ``` So it is stripping out the newline characters ### Expected behavior It should not strip out the newline characters, or I should have the option to tell the tokenizer not to remove newlines (this functionality may already exist and I'm just unaware of it)
02-22-2023 13:33:04
02-22-2023 13:33:04
Even more concise: ``` >>> tokenizer.encode('\n') [1] >>> tokenizer.decode(1) '</s>' ```<|||||>Hey! Thanks for posting this. The reason why the new lines are automatically removed is because this is the default behavior for `Pegasus` see [here](https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/ops/sp_text_encoder.cc#L79), where they have a `preserve_new_line` variable. I don't really know why we don't, but you can add this token as part of the `added_vocab` in order to make sure that it is tready as a special token and not removed. That is the quickest fix . Otherwise, I can open a PR to add this argument (as it was in the original code I guess it makes sense?). I also need to check what it the common practice for that, cc @Narsil if you have an idea<|||||>The `\n` characters get removed by the normalization of this model. It's both within the `setnencepiece.model` file and the equivalent fast tokenizer (PrecompiledCharMap). The only way to get those back would be to modify the normalizer. But since the model vocab doesn't contain any such tokens, you're going to end up with only `unk` everywhere there's a newline. Is it possible for you to use `return_offsets_mapping` to get the offsets and see where those missing values are ? ```python encoded = tokenizer("This \n is", return_offsets_mapping=True) encoded["offset_mapping"] # [(0, 4), (7, 9), (0, 0)] ``` Those show the "hole" within the original string. This can account for newlines as well as other regularized content. It's the only generally working approach for these kind of things. Since there is no newline in the vocab, the model will never be able to output back any new lines so I'm not sure adding an option will be of any help here @ArthurZucker Those offsets show there's<|||||>Thanks for the feedback, @Narsil and @ArthurZucker! Based on your feedback I did this: ``` >>> tokenizer = transformers.AutoTokenizer.from_pretrained("google/pegasus-x-base") >>> token = tokenizers.AddedToken(content="\n", normalized=False) >>> tokenizer.add_tokens(list([token]) >>> sample = "I am a section \n \n Now I should be a few lines below!" >>> inputs = tokenizer.encode(sample, return_tensors="pt") >>> inputs tensor([[ 125, 346, 114, 1201, 96103, 96103, 1032, 125, 246, 129, 114, 324, 1569, 487, 147, 1]]) >>> out = tokenizer.decode(inputs[0]) >>> out 'I am a section\n\n Now I should be a few lines below!</s>' ``` So it appears that everything is ok now. Is there something wrong with my approach, since it's not exactly what you recommended?<|||||>It is exactly what I was recommending 👍🏻 also you can now push the tokenizer to the hub and that should be it! If you do not want the model to strip right and left for this token, you can also control that. Glad this solved your issue!
transformers
21,738
closed
Generate: Fix GIT batched captioning
# What does this PR do? Fixes #21714 (batched image captioning with GIT not working) The problem, at a higher level, boils down to a previously incomplete `batch_size` inference when no `input_ids` nor `inputs_embeds` was being passed to `.generate()` in decoder-only models -- we were always assuming `batch_size=1`, which is not correct in some multimodal models like GIT. If the `.generate()` doesn't receive `input_ids`, then some input tensor must live in `model_kwargs`. Now, we look for tensors in `model_kwargs` and use them as a source of information to determine the `batch_size`, which is then used to initialize `input_ids` with the correct shape. 👉 changes also made on the TF side 👉 a test was added to ensure we don't regress
02-22-2023 13:10:46
02-22-2023 13:10:46
_The documentation is not available anymore as the PR was closed or merged._
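For context, the batched captioning pattern that this fix enables looks roughly like the following (the checkpoint name and the two PIL images `image1`/`image2` are assumptions supplied by the caller):

```python
import torch
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

# image1 and image2 are PIL images loaded by the caller
pixel_values = processor(images=[image1, image2], return_tensors="pt").pixel_values
with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, max_length=30)

print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```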
transformers
21,737
closed
[`Generate`] Fix `gradient_checkpointing` and `use_cache` bug for generate-compatible models
## Feature request When using a model that uses `gradient_checkpointing` and if a user wants to call `generate` with `use_cache`, it leads some models to bugs, such as the one described in https://github.com/huggingface/transformers/pull/21733 The fix should be to slightly refactor some models following the same procedure as in the aforementioned PR ### How to participate 1. If it is your first time here, have a quick look at our [contribution guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) 🤗 2. Pick a model from the list below. Check in the comments here if it hasn't been claimed yet. 3. Claim your models in the comments (e.g. "I want to work on GPT2") 4. Replicate the changes of [this PR](https://github.com/huggingface/transformers/pull/21733) to your model of choice. In other words, move the `if` block to the line above the `... if use_cache else None`, in the same `.forward()` function. Please note that some models may have more than one instance of this block! 5. Make sure you've run our automated code formatting tool (i.e. run `make fixup` in your shell -- also run `make fix-copies` if it requests you to do so) 6. Open a PR. Tag @younesbelkada or @gante (one of us is enough) That's it! With each change, you'll be making `transformers` a little bit better for all of us 💛 ### Models to fix: - [x] Bart | https://github.com/huggingface/transformers/pull/21866 - [x] Bert - [x] BigBird | https://github.com/huggingface/transformers/pull/21882 - [x] BigBirdPegasus - [x] BioGPT | https://github.com/huggingface/transformers/pull/21844 - [x] Blenderbot - [x] BlenderbotSmall - [x] BlipText - [x] Bloom - [x] CodeGen - [x] Esm - [x] Git | https://github.com/huggingface/transformers/pull/21818 - [x] GPT2 | https://github.com/huggingface/transformers/pull/21772 - [x] GptNeo | https://github.com/huggingface/transformers/pull/21733 - [x] GptNeoX | https://github.com/huggingface/transformers/pull/21815 - [x] GPT-J - [x] ImageGPT | https://github.com/huggingface/transformers/pull/21816 - [x] LED | https://github.com/huggingface/transformers/pull/21840 - [x] LongT5 - [x] M2M100 | https://github.com/huggingface/transformers/pull/21841 - [x] Marian | https://github.com/huggingface/transformers/pull/21842 - [x] MBart | https://github.com/huggingface/transformers/pull/21918 - [x] MegratronBert | https://github.com/huggingface/transformers/pull/21921 - [x] MVP | https://github.com/huggingface/transformers/pull/21920 - [x] OPT - [x] Pegasus - [x] PegasusX - [x] ProphetNet | https://github.com/huggingface/transformers/pull/21772 - [x] RemBert - [x] RoFormer - [x] Speech2Text - [x] Speech2Text2 - [x] SpeechT5 - [x] SwitchTransformer - [x] T5 - [x] TimeSeriesTransformer - [x] TrajectoryTransformer - [x] TrOCR - [x] Whisper - [x] XGLM - [x] XLMRobertaXL - [x] Xmod
02-22-2023 12:45:18
02-22-2023 12:45:18
@younesbelkada I am a little confused on where the list for generate-compatible models is located. I'd like to pick up this issue if I can find it!<|||||>Hello @mollerup23 Thanks for your interest, we will update the list with @gante once #21733 gets merged !<|||||>@younesbelkada Looks like it will be essentially the same fix across the other models too. Do you want me to pull that fix into a utility function once merged? Just for illustration, something like - ```py use_cache = should_use_cache(logger, use_cache, self.gradient_checkpointing, self.training) presents = () if use_cache else None ``` and likely in modeling_utils.py - ```py def should_use_cache(logger: Logger, use_cache: bool, gradient_checkpointing: bool, training: bool) -> bool: if use_cache: if gradient_checkpointing and training: logger.warning( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." ) else: return True return False ``` Was looking into making the fix and realized there would be some repetition so thought I'd ask<|||||>Hey @connor-henderson 👋 Thank you for the suggestion! Usually, I'd give the green light to configuration-related DRY approaches such as the one you suggested. However, this one would sit right in `forward()`, and we prioritize clear code (= avoid abstractions) in the modeling code itself. In case you're curious about this position, we have a blog post about why we do it [here](https://huggingface.co/blog/transformers-design-philosophy) 🤗 <|||||>@mollerup23 the list and the instructions are updated, in case you're interested in contributing :D <|||||>Would like to take GPT-2!<|||||>I want to work on GPT-J!<|||||>I would like to work on Blenderbot <|||||>Happy to take on Git, GptNeoX, ImageGPT, LED, LongT5, M2M100, Marian, MBart, MegratronBert, MVP, OPT, Pegasus, PegasusX, RemBert, RoFormer<|||||>Thanks a mile @KMFODA ! 💯 Feel free to take those, and tag me or @gante whenever you feel ready!<|||||>Hi, I am a newbie to open source and would like to contribute. @younesbelkada can I contribute to this issue? <|||||>Hey @saswatmeher Of course yes!! You can pick up a model that has not been taken yet, for example `BioGpt` and do the following: 1- Fork this repository 2- Clone your fork locally and create a new branch `git checkout -b fix-bio-gpt-issue` 3- Modify the file `src/transformers/models/biogpt/modeling_biogpt.py` the same way as all the contributors have modified their files in #21818 #21833 #21815 etc. (You can check `Files Changed` on the PR, on the right top of the Pull Request page) 4- Apply these changes and push the changes on your branch 5- Finally open a Pull Request between `fix-bio-gpt-issue` and `main` branch of `transformers` (+ tag us, myself + @gante ) and we should be good to go! Let us know if you have more questions!<|||||>I am happy to pick up other models too. Can I work on Bart, Bert, BigBird.<|||||>Hello, can I work on Bloom?<|||||>Hi @asrimanth , yes sure you can!<|||||>> @mollerup23 the list and the instructions are updated, in case you're interested in contributing :D Great! I'd like to work on OPT<|||||>HI @gante working on Whisper XGLM XLMRobertaXL<|||||>@mollerup23 hey! OPT was claimed by @KMFODA a few comments above :) Still plenty of models up for grabs, though!<|||||>Hello 👋, I would like to contribute and work on T5. Let me know, Thanks! 
[PR](https://github.com/huggingface/transformers/pull/22036) for the suggested changes.<|||||>@younesbelkada Can I claim TimeSeriesTransformer?<|||||>hi @mollerup23 Of course yes! Please feel free to take it!<|||||>Hey @krypticmouse! Do you need any help for making the fix on GPT-j? <|||||>Hi @younesbelkada, Thanks for asking. My PR got merged long ago.<|||||>Thanks for the heads up, just updated the table, the only model left seems to be TimeSeries Transformer then, thank you all for the great contribution!<|||||>Hey @younesbelkada, may I work on the TimeSeries Transformer? <|||||>@annahung31 I believe @mollerup23 is working on it :) @mollerup23, can you confirm?<|||||>yes @gante @annahung31 , the PR is here: https://github.com/huggingface/transformers/pull/22272
transformers
21,736
closed
How to disable model parallelism and enable data parallelism when using Accelerate and `device_map='auto'`?
### System Info - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.4.15-1.el7.elrepo.x86_64-x86_64-with-glibc2.27 - Python version: 3.10.9 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @pacman100 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I got this error when finetuning "EleutherAI/gpt-j-6B" using LoRA on 8×2080ti: `RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1` Reproduce steps: clone this repo: https://github.com/CarperAI/trlx modify the script: examples/summarize_rlhf/sft/train_gptj_summarize.py ``` import random import os import evaluate import numpy as np import torch import torch.nn as nn from peft import LoraConfig, get_peft_model from summarize_dataset import TLDRDataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments, default_data_collator, ) def set_seed(seed_val=42): random.seed(seed_val) np.random.seed(seed_val) torch.manual_seed(seed_val) torch.cuda.manual_seed_all(seed_val) if __name__ == "__main__": output_dir = "gptj-supervised-summarize-checkpoint" train_batch_size = 4 gradient_accumulation_steps = 1 learning_rate = 1e-5 eval_batch_size = 1 eval_steps = 500 max_input_length = 550 save_steps = 1000 num_train_epochs = 5 random.seed(42) os.environ["WANDB_DISABLED"] = "true" tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", use_cache=False, load_in_8bit=True, device_map='auto') tokenizer.pad_token = tokenizer.eos_token model.resize_token_embeddings(len(tokenizer)) tokenizer.pad_token_id = tokenizer.eos_token_id model.config.end_token_id = tokenizer.eos_token_id model.config.pad_token_id = model.config.eos_token_id for param in model.parameters(): param.requires_grad = False # freeze the model - train adapters later if param.ndim == 1: # cast the small parameters (e.g. 
layernorm) to fp32 for stability param.data = param.data.to(torch.float32) model.gradient_checkpointing_enable() model.enable_input_require_grads() class CastOutputToFloat(nn.Sequential): def forward(self, x): return super().forward(x).to(torch.float32) model.lm_head = CastOutputToFloat(model.lm_head) config = LoraConfig( r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], lora_dropout=0.05, bias="none", task_type="CAUSAL_LM" ) model = get_peft_model(model, config) # Set up the datasets data_path = "CarperAI/openai_summarize_tldr" train_dataset = TLDRDataset( data_path, tokenizer, "train", max_length=max_input_length, ) dev_dataset = TLDRDataset( data_path, tokenizer, "valid", max_length=max_input_length, ) # Set up the metric rouge = evaluate.load("rouge") def compute_metrics(eval_preds): labels_ids = eval_preds.label_ids pred_ids = eval_preds.predictions pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True) result = rouge.compute(predictions=pred_str, references=label_str) return result # Create a preprocessing function to extract out the proper logits from the model output def preprocess_logits_for_metrics(logits, labels): if isinstance(logits, tuple): logits = logits[0] return logits.argmax(dim=-1) # Prepare the trainer and start training training_args = TrainingArguments( output_dir=output_dir, evaluation_strategy="steps", eval_accumulation_steps=1, learning_rate=learning_rate, per_device_train_batch_size=train_batch_size, per_device_eval_batch_size=eval_batch_size, gradient_checkpointing=True, half_precision_backend="auto", fp16=True, adam_beta1=0.9, adam_beta2=0.95, gradient_accumulation_steps=gradient_accumulation_steps, num_train_epochs=num_train_epochs, warmup_steps=100, eval_steps=eval_steps, save_steps=save_steps, load_best_model_at_end=True, logging_steps=50, # deepspeed="examples/summarize_rlhf/sft/ds_config_gptj.json", ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=dev_dataset, compute_metrics=compute_metrics, data_collator=default_data_collator, preprocess_logits_for_metrics=preprocess_logits_for_metrics, ) trainer.train() trainer.save_model(output_dir) ``` and run: `accelerate launch --num_processes 8 examples/summarize_rlhf/sft/train_gptj_summarize.py` Full error logs: ``` ╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮ │ /data/trlx/examples/summarize_rlhf/sft/train_gptj_summarize_lora_acc.py:154 in <module> │ │ │ │ 151 │ │ data_collator=default_data_collator, │ │ 152 │ │ preprocess_logits_for_metrics=preprocess_logits_for_metrics, │ │ 153 │ ) │ │ ❱ 154 │ trainer.train() │ │ 155 │ trainer.save_model(output_dir) │ │ 156 │ │ │ │ /data/transformers/src/transformers/trainer.py:1631 in train │ │ │ │ 1628 │ │ inner_training_loop = find_executable_batch_size( │ │ 1629 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │ │ 1630 │ │ ) │ │ ❱ 1631 │ │ return inner_training_loop( │ │ 1632 │ │ │ args=args, │ │ 1633 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │ │ 1634 │ │ │ trial=trial, │ │ │ │ /data/transformers/src/transformers/trainer.py:1898 in _inner_training_loop │ │ │ │ 1895 │ │ │ │ │ with model.no_sync(): │ │ 1896 │ │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1897 │ │ │ │ else: │ │ ❱ 1898 │ │ │ │ │ tr_loss_step = self.training_step(model, inputs) │ │ 1899 │ │ │ │ │ │ 1900 │ │ │ │ if ( │ │ 1901 │ │ │ │ │ 
args.logging_nan_inf_filter │ │ │ │ /data/transformers/src/transformers/trainer.py:2643 in training_step │ │ │ │ 2640 │ │ │ return loss_mb.reduce_mean().detach().to(self.args.device) │ │ 2641 │ │ │ │ 2642 │ │ with self.compute_loss_context_manager(): │ │ ❱ 2643 │ │ │ loss = self.compute_loss(model, inputs) │ │ 2644 │ │ │ │ 2645 │ │ if self.args.n_gpu > 1: │ │ 2646 │ │ │ loss = loss.mean() # mean() to average on multi-gpu parallel training │ │ │ │ /data/transformers/src/transformers/trainer.py:2675 in compute_loss │ │ │ │ 2672 │ │ │ labels = inputs.pop("labels") │ │ 2673 │ │ else: │ │ 2674 │ │ │ labels = None │ │ ❱ 2675 │ │ outputs = model(**inputs) │ │ 2676 │ │ # Save past state if it exists │ │ 2677 │ │ # TODO: this needs to be fixed and made cleaner later. │ │ 2678 │ │ if self.args.past_index >= 0: │ │ │ │ /home/chenmingrui/miniconda3/envs/petals/lib/python3.10/site-packages/torch/nn/modules/module.py │ │ :1194 in _call_impl │ │ │ │ 1191 │ │ # this function, and just call forward. │ │ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │ │ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │ │ 1195 │ │ # Do not call functions when jit is used │ │ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │ │ │ │ /home/chenmingrui/miniconda3/envs/petals/lib/python3.10/site-packages/torch/nn/parallel/data_par │ │ allel.py:157 in forward │ │ │ │ 154 │ │ │ │ │ 155 │ │ │ for t in chain(self.module.parameters(), self.module.buffers()): │ │ 156 │ │ │ │ if t.device != self.src_device_obj: │ │ ❱ 157 │ │ │ │ │ raise RuntimeError("module must have its parameters and buffers " │ │ 158 │ │ │ │ │ │ │ │ │ "on device {} (device_ids[0]) but found one of " │ │ 159 │ │ │ │ │ │ │ │ │ "them on device: {}".format(self.src_device_obj, │ │ 160 │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1 ``` ### Expected behavior I'm using 8×2080ti. When training using 1×2080ti and running `python examples/summarize_rlhf/sft/train_gptj_summarize.py`, the above code runs normally, which means the model and data can fit in only one gpu. Then I want to use data parallelism and do not use model parallelism, just like DDP. The `load_in_8bit` option in `.from_pretrained()` requires setting `device_map` option. With `device_map='auto'`, it seems that the model is loaded on several gpus, as in naive model parallelism, which results in this error: `RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1` while training. May be setting `device_map` correctly should solve this problem, but I can't find how to do this in document.
02-22-2023 11:49:13
02-22-2023 11:49:13
Hello @chenmingjiongjiong What is the VRAM of your GPU? can you alternatively try `device_map={'':torch.cuda.current_device()}`?<|||||>> can you alternatively try `device_map={'':torch.cuda.current_device()}` This solved my problem. Thanks! Then I got another error about bitsandbytes, I have submitted an issue in their [repo](https://github.com/TimDettmers/bitsandbytes/issues/162). <|||||>> Wow, this is interesting! Could you explain why this trick works?<|||||>Sure @beyondguo Per my understanding, and if I got it right it should very simple. `device_map={"":0}` simply means "try to fit the entire model on the device 0" - device 0 in this case would be the GPU-0 In a distributed setting `torch.cuda.current_device()` should return the current device the process is working on. If you have 4 GPUs and running DDP with 4 processes each process should be working on an independent GPU, meaning that if each process load a model with `device_map={"":i}` the process `i` will try to fit the entire model on the GPU `i`, this leads to properly having `n` working processes that have a replica of the model. I remember I had some issues while using `torch.cuda.current_device()` therefore now I advise users to use `accelerate` instead and retrieve the current process index with the following trick: ```python from accelerate import Accelerator dummy_accelerator = Accelerator() current_device = dummy_accelerator.process_index ``` Let me know if anything is unclear<|||||>Thanks @younesbelkada Now I'm using LoRA to tune a LLM (ChatGLM-6B) using 2 * A800 80G. I've got some findings that really confuse me. The first problem: - Setting `device_map="auto”` to my understanding means setting model parallelization (MP), which will put the model layers into different devices. Thus, during training, only one GPU is calculating. - Setting `model.is_parallelizable=False` means I don't want to set MP. However, if I both set `device_map="auto”` and `model.is_parallelizable=False`, model parallelization is still activated. I think `model.is_parallelizable=False` should block the model parallelization. Second problem: - Setting `device_map={'':torch.cuda.current_device()}`, it means the model is copied to both GPUs. - Setting device_map="auto", I see the model to split into two parts: However, I found the latter method consumes nearly the save GPU memories per GPU as the first method. Why? I thought it should only consume half the memories per GPU compared with the first method. --- One more thing, using `device_map="auto"`, the batch size is halved, compared with `device_map={'':torch.cuda.current_device()}`, however, it is even 1.5 x faster! Could you please explain why this happens? Many thanks!<|||||>Hi @beyondguo Thanks for looping back 1- Yes setting device_map = auto means that you want to set Model Parallelism, meaning putting the model into different GPU layers and one GPU at a time will be used 2- I think in the latest versions of transformers this argument is not needed anymore Regarding the second problem I think this is expected, if you run things correctly if you have a copy of the model in 2 GPUs you will also have 2 copies of the optimizer states and the input data will be also split across both processes<|||||>Thanks for your detailed reply! @younesbelkada To my understanding, when using `device_map="auto"`, only a subset of all layers is allocated to one GPU, which should lead to **lower** GPU consumption. 
However, it consumes **nearly the same** GPU memories as setting `device_map={'':torch.cuda.current_device()}`.<|||||>I see, thanks for your reply! Can you provide more details (how many GBs allocated, which model, etc.?) Thanks!<|||||>Sure. Model: ChatGLM-6B device: 4 * A800-80G 70 GBs allocated for each GPU. The code I'm using is https://github.com/beyondguo/LLM-Tuning/blob/796384e837b3b6d70564d50ef5bb46f9175cb700/chatglm_lora_tuning.py#L87 <|||||>Thanks for sharing those > Model: ChatGLM-6B I see the model is running in full precision, a 6B model would require 24GB VRAM just to be loaded on the GPU > 70 GBs allocated for each GPU. Do you run your script using `torch.distributed.run` or just `python yourscript.py`?<|||||>simply `python yourscript.py`, I'm using Trainer, which I think should automatically manage the GPU allocation.<|||||>I see better now, if you want to benefit from data parallelism as mentioned here: https://github.com/huggingface/transformers/issues/21736#issuecomment-1595699638 or in the original message from the author you need 2 things: - use the main branch of transformers that contains multiple fixes of accelerate + Trainer integration - run `accelerate config` --> select multi GPU then run your script with `accelerate launch yourscript.py`. to make sure that only the main process saves the model you can add a simple check in the `model.save_pretrained` and do something like that instead: ```python if trainer.accelerator.is_main_process: model.save_pretrained(training_args.output_dir) ```<|||||>Thanks! I will try these later.<|||||>Hi @younesbelkada Sorry to bother you again. I'm still working on the "device_map" thing... I'm curious how does `transformers` automatically allocate the layers to different GPUs. When I load the [ChatGLM-6B](https://huggingface.co/THUDM/chatglm-6b/blob/main/modeling_chatglm.py) model, using `device_map="auto"`, I see the layers are allocated to: ``` {'transformer.word_embeddings': 0, 'lm_head': 0, <----- 'transformer.layers.0': 0, 'transformer.layers.1': 0, 'transformer.layers.2': 0, 'transformer.layers.3': 0, 'transformer.layers.4': 0, 'transformer.layers.5': 1, 'transformer.layers.6': 1, 'transformer.layers.7': 1, 'transformer.layers.8': 1, 'transformer.layers.9': 1, 'transformer.layers.10': 1, 'transformer.layers.11': 1, 'transformer.layers.12': 1, 'transformer.layers.13': 1, 'transformer.layers.14': 2, 'transformer.layers.15': 2, 'transformer.layers.16': 2, 'transformer.layers.17': 2, 'transformer.layers.18': 2, 'transformer.layers.19': 2, 'transformer.layers.20': 2, 'transformer.layers.21': 2, 'transformer.layers.22': 2, ... 
'transformer.layers.24': 3, 'transformer.layers.25': 3, 'transformer.layers.26': 3, 'transformer.layers.27': 3, 'transformer.final_layernorm': 3} ``` And when I change the model to [ChatGLM2-6B](https://huggingface.co/THUDM/chatglm2-6b/blob/main/modeling_chatglm.py), the allocation is: ``` {'transformer.embedding': 0, 'transformer.rotary_pos_emb': 0, 'transformer.encoder.layers.0': 0, 'transformer.encoder.layers.1': 0, 'transformer.encoder.layers.2': 0, 'transformer.encoder.layers.3': 0, 'transformer.encoder.layers.4': 0, 'transformer.encoder.layers.5': 0, 'transformer.encoder.layers.6': 1, 'transformer.encoder.layers.7': 1, 'transformer.encoder.layers.8': 1, 'transformer.encoder.layers.9': 1, 'transformer.encoder.layers.10': 1, 'transformer.encoder.layers.11': 1, 'transformer.encoder.layers.12': 1, 'transformer.encoder.layers.13': 1, 'transformer.encoder.layers.14': 2, 'transformer.encoder.layers.15': 2, 'transformer.encoder.layers.16': 2, 'transformer.encoder.layers.17': 2, 'transformer.encoder.layers.18': 2, 'transformer.encoder.layers.19': 2, 'transformer.encoder.layers.20': 2, 'transformer.encoder.layers.21': 2, 'transformer.encoder.layers.22': 3, ... 'transformer.encoder.layers.25': 3, 'transformer.encoder.layers.26': 3, 'transformer.encoder.layers.27': 3, 'transformer.encoder.final_layernorm': 3, 'transformer.output_layer': 3} <----- ``` My question is, the `lm_head` layer in ChatGLM-6B and the `output_layer` in ChatGLM2-6B are both the **last** layer of the models, but why `lm_head` is in `cuda:0` (same as the input layer), the `output_layer` is put in `cuda:3` (different from the input layer). Because of this, when I train the ChatGLM-6B, every things is fine; but when I train the ChatGLM2-6B, an error occurs during the model forward pass loss computing: `RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:3 and cuda:0! (when checking argument for argument target in method wrapper_CUDA_nll_loss_forward)` Do you know what's the problem? How can I fix this? Many thanks! --- update: I have a workaround (which I think is too ugly, lol): ```python model.hf_device_map['transformer.output_layer'] = model.hf_device_map['transformer.embedding'] model = AutoModel.from_pretrained("THUDM/chatglm2-6b", trust_remote_code=True, device_map=model.hf_device_map) ``` which is to manually change the `output_layer`'s device, and reload the model. <|||||>Hi @beyondguo Thanks for the ping, and no problem at all `device_map='auto'` will dispatch the model evenly across all available GPUs. I think the issue you are facing is related to the fact that for the first model the weight is probably tied with the embedding layer (i.e. they are the same), hence the device of that layer being on the first GPU device. For the second model, maybe the lm_head is not tied to the embedding layer. Regarding your solution, I think it looks fine, you can probably load the first model on the meta device using `init_empty_weights()` context manager from accelerate and make it slightly more efficient. Thanks!
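Summarizing the device-placement trick from this thread, below is a sketch of loading one full model replica per DDP rank (run the script with `accelerate launch`). The checkpoint name is a placeholder, and `load_in_8bit` assumes `bitsandbytes` is installed.

```python
# One full model replica per process/GPU (data parallelism), instead of sharding
# the model across GPUs with `device_map="auto"` (model parallelism).
from accelerate import Accelerator
from transformers import AutoModelForCausalLM

accelerator = Accelerator()
current_device = accelerator.process_index  # 0..N-1, one index per launched process

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",            # placeholder checkpoint
    load_in_8bit=True,                # requires bitsandbytes
    device_map={"": current_device},  # "fit the entire model on this device"
)
```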
transformers
21,735
closed
Fix resume_from_checkpoint for deepspeed
@stas00 This fixes the resume_from_checkpoint for deepspeed, by ensuring that the deepspeed engine is the one to load the checkpoint. # What does this PR do? It disables the regular load_from_checkpoint, and allowing it to go to the deepspeed engine instead. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-22-2023 10:47:50
02-22-2023 10:47:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21735). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @mosheber! The CI is not triggered. It seems there is an issue with your CircleCI permissions, the tests won't run. Could you first try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-)? Thank you, and let me know if the CI could be triggered after this :-)<|||||>> @ydshieh @sgugger , thank you for approving! For some reason the CircleCI still wont run properly, I tried logging out, revoking, logging back in, refreshing, yet to avail. Could you perform the merge on your end? Or perhaps trigger the CI run? <|||||>Hi @mosheber Thank you for trying. I will push an empty commit to your PR branch to trigger CI - are you OK with it?<|||||>Well, I think one more step to follow from your side: (as I am not able to trigger even with a push) Could you check if you are following huggingface/transformers instead of your own fork. You can check it at this link https://app.circleci.com/projects/project-dashboard/github/mosheber/ If you are following your own fork, you have to unfollow it, and follow `huggingface/transformers` instead. > If a user submits a pull request to your repository from a fork, but no pipeline is triggered, then the user most likely is following a project fork on their personal account rather than the project itself of CircleCI, causing the jobs to trigger under the user’s personal account and not the organization account. To resolve this issue, have the user unfollow their fork of the project on CircleCI and instead follow the source project. This will trigger their jobs to run under the organization when they submit pull requests. https://circleci.com/docs/oss/#build-pull-requests-from-forked-repositories<|||||>If you are OK, I can also fork your PR, create a new one, but add your name as a contributor then merge the new PR. This might be easier.<|||||>> If you are OK, I can also fork your PR, create a new one, but add your name as a contributor then merge the new PR. This might be easier. This seems to be the easiest approach, lets go with that<|||||>well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see.<|||||>> well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see. Looks like all checks have passed, thanks! Merging should now be possible <|||||>> > well, while I pushed to `huggingface/transformers`, the CI here is triggered ... let's see. > > Looks like all checks have passed, thanks! Merging should now be possible Yeah, but see my comment https://github.com/huggingface/transformers/pull/21735#discussion_r1117246877<|||||>> Looks like all checks have passed, thanks! Merging should now be possible But it's not ready. Please revisit: https://github.com/huggingface/transformers/pull/21735#discussion_r1116267795<|||||>> > Looks like all checks have passed, thanks! Merging should now be possible > > But it's not ready. Please revisit: [#21735 (comment)](https://github.com/huggingface/transformers/pull/21735#discussion_r1116267795) Sure thing, I removed it and changed the elif<|||||>Thanks Moshe, I need to run offline tests since deepspeed tests don't run on live CI (need gpus), and will merge once all is green there.<|||||>@ydshieh - do you by chance have any idea why CI isn't running? 
This time it appears to be some other problem than the original one we discussed in this issue. Thank you!<|||||>Moshe, CircleCI doesn't like something about your CircleCI account settings. Since there is nothing I can do about it and I don't want this to drag forever I've recreated your PR here https://github.com/huggingface/transformers/pull/21798 - so let's finish it there. Thank you. <|||||>> Moshe, CircleCI doesn't like something about your CircleCI account settings. Since there is nothing I can do about it and I don't want this to drag forever I've recreated your PR here #21798 - so let's finish it there. Thank you. No problem. Just in case, I also tried to trigger another CI run after unfollowing the project as suggested here: https://support.circleci.com/hc/en-us/articles/360008097173-Troubleshooting-why-pull-requests-are-not-triggering-jobs-on-my-organization- Maybe it will help too<|||||>Thank you for digging this up, Moshe. The relevant quote seems to be: > If you're following the fork instead of the upstream repo > > A user who submits a pull request to your repository from a fork, but no pipeline is triggered with the pull request. This can happen when the user is following the project fork on their personal account rather than the project itself on CircleCI. > > This will cause the jobs to trigger under the user's personal account. If the user is following a fork of the repository on CircleCI, we will only build on that fork and not the parent, so the parent’s PR will not get status updates. > > In these cases, the user unfollows their fork of the project on CircleCI. This will trigger their jobs to run under the organization when they submit pull requests. Those users can optionally follow the source project if they wish to see the pipelines. which as you said you did and the CI has started. Excellent work - now we would know what to tell the users in the future.<|||||>> Glad I could help! Thanks!
transformers
21,734
closed
Discrepancies between scores shapes between doc/generate methods
### System Info - `transformers` version: 4.26.1 - Platform: Linux-6.0.19-4-MANJARO-x86_64-with-glibc2.37 - Python version: 3.10.9 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I use the generate method to retrieve the probability distributions at each step of the generation. I rely on `scores` to get them. I noticed discrepancies between the shape of `scores` between the doc, the actual behavior and between different generation configuration (differences that should not change anything, I believe). The following snippet highlight the main problem: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base") tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base") string_inputs = [ "translate in french: I love cats", "Answer the question: what is 3+5 ?" ] inputs = tokenizer(string_inputs, padding=True, truncation=True, return_tensors='pt') outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, num_beams=7, num_return_sequences=5) print(type(outputs)) print(outputs.scores[0].shape) # Output: # <class 'transformers.generation.utils.BeamSearchEncoderDecoderOutput'> # torch.Size([14, 32128]) # This is consistent with the doc and is the expected behavior: 14 = (batch_size=2)*(num_beams=7) # https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSearchEncoderDecoderOutput outputs = model.generate(**inputs, return_dict_in_generate=True, output_scores=True, num_beams=7, num_return_sequences=5, do_sample=True) print(type(outputs)) print(outputs.scores[0].shape) # Output: # <class 'transformers.generation.utils.BeamSampleEncoderDecoderOutput'> # torch.Size([70, 32128]) # This is not consistent with the documentation seems not to be the expected behavior: # 70 = (batch_size=2)*(num_beams=7)*(num_return_sequences=5) # https://huggingface.co/docs/transformers/internal/generation_utils#transformers.generation.BeamSampleEncoderDecoderOutput ``` It is worth pointing out that in the documentation, the shape `batch_size*num_beams*num_return_sequences` is to be expected with DecoderOnlyOutput; however I don't understand why there would be a difference, nor what the `num_return_sequences` does there anyway. ### Expected behavior The shape of an element of scores should always be `(batch_size*num_beam, vocab_size)`, and not `(batch_size*num_beam*num_return_sequences, vocab_size)`. If I missed something and I'm, in fact mistaken, at the very least the documentation is not consistent with the actual behavior.
02-22-2023 10:20:42
02-22-2023 10:20:42
Hey @icannos 👋 Here the problem is the documentation, which is incomplete :) Have a look at my answer in our forum, for a similar question -- https://discuss.huggingface.co/t/t5-why-do-we-have-more-tokens-expressed-via-cross-attentions-than-the-decoded-sequence/31893<|||||>Updating the docs is in our TODO list!<|||||>> Hey @icannos wave Here the problem is the documentation, which is incomplete :) > > Have a look at my answer in our forum, for a similar question -- https://discuss.huggingface.co/t/t5-why-do-we-have-more-tokens-expressed-via-cross-attentions-than-the-decoded-sequence/31893 Thanks for your quick reply but I'm not sure it answers my problem. It is not about the length of the generated sequences but about the number of beams reported. I believe the behavior should be identical in both the examples I gave. I don't understand why `num_return_sequences` is at all involved in the shape of scores. Moreover, it is not involved when there is not sampling but is there when sampling is involved.<|||||>Hey @icannos -- my apologies. I was multitasking and did not read your issue with proper attention. You're right, my answer does not address your question. The difference in behavior you describe stems from these two lines: [beam search](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/src/transformers/generation/utils.py#L1470) // [beam sample](https://github.com/huggingface/transformers/blob/82e61f34451dbea2de8d2220d51b0609d605dfd7/src/transformers/generation/utils.py#L1507). According to the code specified there, the behavior you see is correct (even if not properly documented!). This means that `beam_sample` runs `num_return_sequences` independent beam searches for a given input (with the next token being selected from sampling, rather than simply the most likely token) and keeps the top beam in each one, as opposed to simply drawing the top `num_return_sequences` beams from one beam search, resulting in `num_return_sequences` times more `scores`. This is done to increase diversity in the output: the top `num_return_sequences` of a given beam search tend to be very similar to each other. Does this answer your questions? 🤗<|||||>Yes I had found those lines, but I did think it was a mistake or misleading behavior. I don't know how it could be done, but I feel like it would be good that the behaviors were the same for all generation methods. So I guess, if I want the sequences of probability distribution for each sequence I should reshape the scores to be `batch_size, num_return_sequences, num_beam, vocab_size` and retrieve `scores[:, :, 0, :]` right ? Thank for your help! <|||||>@icannos > I don't know how it could be done, but I feel like it would be good that the behaviors were the same for all generation methods. Yeah... it's hard to balance a consistent API with consistent results. In this particular case, they are at odds with each other :( We favor consistent results in an attempt to maximize usefulness, as only advanced users tend to fiddle with these details -- how can a beginner know that gathering multiple beams from a single stochastic beam search is a bad idea? 😛 > So I guess, if I want the sequences of probability distribution for each sequence I should reshape the scores to be batch_size, num_return_sequences, num_beam, vocab_size and retrieve scores[:, :, 0, :] right ? Either that or slicing, yes <|||||>Thanks a lot for your help, I'll close the issue. Yeah I get the results driven API. 
I hope at least this issue might help someone in the future stumbling upon these problems.<|||||>@gante I would like to reopen this issue, I'm getting it even with greedy decoding (which should not run beam search AFAIK). When calling `output = model.generate(**input, output_scores=True, renormalize_logits=True, max_new_tokens=3, min_new_tokens=1, do_sample=False, return_dict_in_generate=True)`, on accessing `output['scores']`, I get 3 tensors, each of which are of shape (batch_size, 32128). This does not make much sense because flan-t5 only has 32100 tokens in its vocab. It's not clear where these extra scores are coming from, and which ones I should ignore?<|||||>Hey @adivekar-utexas 👋 Many models run computations with embedding sizes larger than the actual vocabulary size, for speed purposes (e.g. [see here why](https://twitter.com/karpathy/status/1621578354024677377)). Any time you see an embedding size larger than the vocabulary size, you can safely discard the tokens whose index is beyond the vocabulary size :)<|||||>Thanks for the confirmation @gante ! For reference for others finding this issue, the necessary fix is to truncate the vocab dimension to `len(tokenizer.get_vocab())`, i.e. `scores_at_timesteps: List[Tensor] = [scores_at_timestep[:, len(tokenizer.get_vocab())] for scores_at_timestep in output['scores']]` `tokenizer.vocab_size` sometimes does not take into account special tokes.
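Pulling the thread's conclusions together, here is a sketch for the beam-sample case above. Note the slice `:vocab_size`: the list comprehension quoted in the last comment indexes a single column where a slice appears to be intended. The model and settings follow the original report; the beam-axis indexing follows the reshape agreed on in the thread.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

inputs = tokenizer(["translate to French: I love cats"], return_tensors="pt")
num_beams, num_return_sequences = 7, 5
out = model.generate(
    **inputs,
    num_beams=num_beams,
    num_return_sequences=num_return_sequences,
    do_sample=True,
    output_scores=True,
    return_dict_in_generate=True,
)

batch_size = inputs["input_ids"].shape[0]
vocab_size = len(tokenizer.get_vocab())  # 32100 here, smaller than the padded 32128 logit dimension

step0 = out.scores[0][:, :vocab_size]  # drop the padded columns beyond the real vocabulary
# With sampling + beams the leading dim is batch * num_return_sequences * num_beams;
# keeping index 0 on the beam axis follows the reshape discussed above.
step0 = step0.reshape(batch_size, num_return_sequences, num_beams, vocab_size)[:, :, 0, :]
print(step0.shape)  # torch.Size([1, 5, 32100])
```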
transformers
21,733
closed
[`GPTNeo`] Fix gradient checkpointing bug
# What does this PR do? This PR fixes a small bug that a user can encounter when calling `generate` on models that use `gradient_checkpointing`. In the context of `trl` we call `generate` on the active model that uses `gradient_checkpointing` to save memory. Currently this snippet fails on `main`:
```python
import torch
from transformers import AutoModelForCausalLM

# Now let's build the model, the reference model, and the tokenizer.
model = AutoModelForCausalLM.from_pretrained("edbeeching/gpt-neo-125M-imdb", device_map="auto")
model.train()
model.gradient_checkpointing_enable()
gen = model.generate(input_ids=torch.LongTensor([0,1,2,3]).unsqueeze(0))
```
This is because `gradient_checkpointing_enable` and `use_cache` are not compatible, and calling `generate` uses `use_cache` by default. Though there is a check that forces `use_cache` to `False`, that check is not enough: the `presents` tuple has already been initialized as an empty tuple, is never populated, and still gets returned, which leads to a confusing error. IMO the fix should be to force-set the `presents` tuple to `None` if the model is using `gradient_checkpointing`. cc @sgugger @amyeroberts
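Until a fix like this lands, one way to sidestep the error from the snippet above is to disable the cache explicitly when generating, at the cost of slower decoding. This is an application-level workaround, not the change this PR makes; the checkpoint below is a placeholder for any gradient-checkpointed causal LM.

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")  # placeholder checkpoint
model.train()
model.gradient_checkpointing_enable()

# Passing use_cache=False means the model never builds the (empty) `presents` tuple,
# so generation runs, just without the key/value cache speed-up.
gen = model.generate(
    input_ids=torch.LongTensor([[0, 1, 2, 3]]),
    use_cache=False,
    max_new_tokens=8,
)
```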
02-22-2023 09:01:31
02-22-2023 09:01:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante <|||||>Change looks reasonable to me. Could you provide links to the check you refer to where `use_cache` is set to False and also add the error output so this solution can be found easily if encountered again please? <|||||>Sure, here is the traceback: ```bash │ /home/younes_huggingface_co/code/transformers/src/transformers/generation/utils.py:1402 in │ │ generate │ │ │ │ 1399 │ │ │ │ ) │ │ 1400 │ │ │ │ │ 1401 │ │ │ # 11. run greedy search │ │ ❱ 1402 │ │ │ return self.greedy_search( │ │ 1403 │ │ │ │ input_ids, │ │ 1404 │ │ │ │ logits_processor=logits_processor, │ │ 1405 │ │ │ │ stopping_criteria=stopping_criteria, │ │ │ │ /home/younes_huggingface_co/code/transformers/src/transformers/generation/utils.py:2197 in │ │ greedy_search │ │ │ │ 2194 │ │ │ model_inputs = self.prepare_inputs_for_generation(input_ids, **model_kwargs) │ │ 2195 │ │ │ │ │ 2196 │ │ │ # forward pass to get next token │ │ ❱ 2197 │ │ │ outputs = self( │ │ 2198 │ │ │ │ **model_inputs, │ │ 2199 │ │ │ │ return_dict=True, │ │ 2200 │ │ │ │ output_attentions=output_attentions, │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │ │ s/module.py:1194 in _call_impl │ │ │ │ 1191 │ │ # this function, and just call forward. │ │ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │ │ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │ │ 1195 │ │ # Do not call functions when jit is used │ │ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/accelerate/hook │ │ s.py:158 in new_forward │ │ │ │ 155 │ │ │ with torch.no_grad(): │ │ 156 │ │ │ │ output = old_forward(*args, **kwargs) │ │ 157 │ │ else: │ │ ❱ 158 │ │ │ output = old_forward(*args, **kwargs) │ │ 159 │ │ return module._hf_hook.post_forward(module, output) │ │ 160 │ │ │ 161 │ module.forward = new_forward │ │ │ │ /home/younes_huggingface_co/code/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.p │ │ y:739 in forward │ │ │ │ 736 │ │ """ │ │ 737 │ │ return_dict = return_dict if return_dict is not None else self.config.use_return │ │ 738 │ │ │ │ ❱ 739 │ │ transformer_outputs = self.transformer( │ │ 740 │ │ │ input_ids, │ │ 741 │ │ │ past_key_values=past_key_values, │ │ 742 │ │ │ attention_mask=attention_mask, │ │ │ │ /home/younes_huggingface_co/miniconda3/envs/fix-test/lib/python3.9/site-packages/torch/nn/module │ │ s/module.py:1194 in _call_impl │ │ │ │ 1191 │ │ # this function, and just call forward. 
│ │ 1192 │ │ if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o │ │ 1193 │ │ │ │ or _global_forward_hooks or _global_forward_pre_hooks): │ │ ❱ 1194 │ │ │ return forward_call(*input, **kwargs) │ │ 1195 │ │ # Do not call functions when jit is used │ │ 1196 │ │ full_backward_hooks, non_full_backward_hooks = [], [] │ │ 1197 │ │ if self._backward_hooks or _global_backward_hooks: │ │ │ │ /home/younes_huggingface_co/code/transformers/src/transformers/models/gpt_neo/modeling_gpt_neo.p │ │ y:545 in forward │ │ │ │ 542 │ │ │ past_length = 0 │ │ 543 │ │ │ past_key_values = tuple([None] * len(self.h)) │ │ 544 │ │ else: │ │ ❱ 545 │ │ │ past_length = past_key_values[0][0].size(-2) │ │ 546 │ │ │ │ 547 │ │ if position_ids is None: │ │ 548 │ │ │ position_ids = torch.arange(past_length, input_shape[-1] + past_length, dtyp │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ IndexError: tuple index out of range ``` And `use_cache` is force-set to False [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L515) but `presents` are already initialized [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L503), hence returned [here](https://github.com/huggingface/transformers/blob/aff87da15b04b260c6057dd47a70376f2b2386f3/src/transformers/models/gpt_neox/modeling_gpt_neox.py#L555) <|||||>The reason for the change sounds good 👍 Two comments, though: 1. We can solve this problem without adding more logic -- e.g. moving `if` that sets `use_cache=False` up in the class, before `presents` is initialized. Less logic = simpler code = fewer bugs = 💛 2. This is a widespread pattern in generate-compatible models. If we make this change, the least we should do is open an issue with the `Good First Issue` label that tracks which models have already received the change!<|||||>Thanks a lot for these suggestions @gante! This makes a lot of sense, I have adapted the code accordingly and drafted a Good First issue that we can edit once we figure out if the bug persists for other models<|||||>I'm getting same error in t5, is this the same reason ? File /usr/local/lib/python3.8/dist-packages/transformers/models/t5/modeling_t5.py:506, in T5Attention.forward.<locals>.project(hidden_states, proj_layer, key_value_states, past_key_value) 502 if past_key_value is not None: 503 if key_value_states is None: 504 # self-attn 505 # (batch_size, n_heads, key_length, dim_per_head) --> 506 hidden_states = torch.cat([past_key_value, hidden_states], dim=2) 507 elif past_key_value.shape[2] != key_value_states.shape[1]: 508 # checking that the `sequence_length` of the `past_key_value` is the same as 509 # the provided `key_value_states` to support prefix tuning 510 # cross-attn 511 # (batch_size, n_heads, seq_length, dim_per_head) 512 hidden_states = shape(proj_layer(key_value_states)) RuntimeError: Sizes of tensors must match except in dimension 2. Expected size 16 but got size 32 for tensor number 1 in the list.<|||||>Not sure in this case, can you share a reproducible script?
transformers
21,732
closed
AttributeError: 'GenerationConfig' object has no attribute 'architectures'
### System Info - `transformers` version: 4.26.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. Run the code below: ```python from transformers import pipeline, set_seed, GenerationConfig config = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1) generator = pipeline('text-generation', model='gpt2-xl', device=0, config = config) ``` 2. Get this error: ``` Traceback (most recent call last): File "C:\Users\Cooper Lynn\gpt\bot.py", line 4, in <module> generator = pipeline('text-generation', model='gpt2-xl', device=0, config = config) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\__init__.py", line 754, in pipeline framework, model = infer_framework_load_model( File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\base.py", line 224, in infer_framework_load_model if config.architectures: AttributeError: 'GenerationConfig' object has no attribute 'architectures' ``` # Extra info I can't access the documentation for the `GenerationConfig()` function. ### Expected behavior No error.
02-21-2023 23:42:01
02-21-2023 23:42:01
Until this is fixed I'll use the decapitated method.<|||||>Hello @Ccode-lang Thanks for the issue , I think the correct keyword argument here is `generate_config`: ```python from transformers import pipeline, set_seed, GenerationConfig config = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1) generator = pipeline('text-generation', model='gpt2-xl', device=0, generate_config = config) generator("Hello") ``` cc @gante in case I missed something<|||||>Oh, thanks! That doesn't show up on the docs so I didn't know about this parameter only `config` showed up. I'll test this later.<|||||>Now I get this error: ``` warnings.warn( Traceback (most recent call last): File "C:\Users\Cooper Lynn\gpt\bot.py", line 53, in <module> in_out() File "C:\Users\Cooper Lynn\gpt\bot.py", line 47, in in_out response = generator(text)[0]['generated_text'].removeprefix(text).split("\n")[0][1:] File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\text_generation.py", line 210, in __call__ return super().__call__(text_inputs, **kwargs) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\base.py", line 1084, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\base.py", line 1091, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\base.py", line 992, in forward model_outputs = self._forward(model_inputs, **forward_params) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\pipelines\text_generation.py", line 252, in _forward generated_sequence = self.model.generate(input_ids=input_ids, attention_mask=attention_mask, **generate_kwargs) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\generation\utils.py", line 1197, in generate self._validate_model_kwargs(model_kwargs.copy()) File "C:\Users\Cooper Lynn\gpt\.env\lib\site-packages\transformers\generation\utils.py", line 1090, in _validate_model_kwargs raise ValueError( ValueError: The following `model_kwargs` are not used by the model: ['generate_config'] (note: typos in the generate arguments will also show up in this list) ``` <|||||>I'm going to guess this is because I am using GPT-2?<|||||>Hey all -- the actual keyword for the argument is `generation_config`, and not `generate_config` :) `pipeline()` accepts any argument that `.generate()` does<|||||>Oh ok, thanks! I'm going to fix this once and for all now :rofl: <|||||>Ok that works, thanks for all of your help.
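For reference, a corrected version of the original snippet using the keyword the thread converged on. `do_sample=True` is added here so that `temperature` actually takes effect, and GPU device 0 is assumed as in the issue.

```python
from transformers import GenerationConfig, pipeline

config = GenerationConfig(max_new_tokens=500, temperature=1.2, num_return_sequences=1, do_sample=True)
# `generation_config` (not `config` or `generate_config`) is forwarded to `.generate()`
generator = pipeline("text-generation", model="gpt2-xl", device=0, generation_config=config)
print(generator("Hello")[0]["generated_text"])
```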
transformers
21,731
closed
Fix `GPTSanJapaneseModel`
# What does this PR do? ``` return_dict = return_dict if return_dict is not None else self.config.return_dict ``` should be ``` return_dict = return_dict if return_dict is not None else self.config.use_return_dict ``` as in many other places. Otherwise torchscript tests will fail.
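For context, the reason TorchScript cares about the distinction is that the `use_return_dict` property also folds in the `torchscript` flag, roughly as in the sketch below (a paraphrase; the exact `PretrainedConfig` implementation may differ between versions).

```python
class ConfigSketch:
    """Toy stand-in for the relevant part of `PretrainedConfig`."""

    def __init__(self, return_dict: bool = True, torchscript: bool = False):
        self.return_dict = return_dict
        self.torchscript = torchscript

    @property
    def use_return_dict(self) -> bool:
        # TorchScript cannot trace dict-like ModelOutput returns, so tuple outputs
        # are forced whenever `torchscript=True`.
        return self.return_dict and not self.torchscript


print(ConfigSketch(torchscript=True).use_return_dict)  # False -> model returns tuples under TorchScript
```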
02-21-2023 19:26:29
02-21-2023 19:26:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,730
closed
[`MBart`] Fix cross attention mask check
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/21728 The current `MBart` code leads to an error that is hard to interpret for users due to a possible typo as pointed out in https://github.com/huggingface/transformers/issues/21728 To reproduce: ```python import torch from transformers import MBartModel input_ids = torch.LongTensor([[0, 1, 1, 0]]) model = MBartModel.from_pretrained("facebook/mbart-large-cc25") head_mask = None cross_attn_head_mask = torch.ones(1000, 1, 1, 1) model(input_ids, head_mask=head_mask, cross_attn_head_mask=cross_attn_head_mask) ``` This PR fixes this cc @sgugger
02-21-2023 16:50:20
02-21-2023 16:50:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,729
closed
Added "Open in Colab" to task guides
Some of the task guides did not have the "Open in Colab" option, which can be very useful in this type of doc. This small PR adds the option.
02-21-2023 15:34:00
02-21-2023 15:34:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,728
closed
Error with shape-assertion regarding head_masks in mbart_modeling file
### System Info file: https://github.com/huggingface/transformers/blob/v4.26.1/src/transformers/models/mbart/modeling_mbart.py commit-hash: fd5cdaeea6eafac32e9d967327bfa3dc0e0d962d ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Call the forward method of MBartModel with: - head_mask=None, - decoder_head_mask=None, and - cross_attn_mask=torch.ones(mismatching_layer_size, number_of_heads) ### Expected behavior **Expected Output:** The cross_attn_head_mask should be specified for X layers, but it is for Y. **Actual Output:** 'NoneType' object has no attribute 'size' **Potential fix:** The head_mask from line 1058 should probably reference the attn_mask from line 1053.
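A self-contained sketch of the check as it presumably should read. Variable names follow the bart-style decoders; this illustrates the proposed one-line fix rather than reproducing the exact diff.

```python
from typing import Optional

import torch


def check_head_masks(num_layers: int, head_mask: Optional[torch.Tensor], cross_attn_head_mask: Optional[torch.Tensor]):
    for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]):
        # The fix: validate the loop variable `attn_mask`, not `head_mask`, so that a
        # `cross_attn_head_mask` passed on its own no longer triggers
        # "'NoneType' object has no attribute 'size'".
        if attn_mask is not None and attn_mask.size()[0] != num_layers:
            raise ValueError(
                f"The `{mask_name}` should be specified for {num_layers} layers, but it is for {attn_mask.size()[0]}."
            )


try:
    # Mirrors the reproduction above: head_mask=None, mismatching cross_attn_head_mask.
    check_head_masks(num_layers=12, head_mask=None, cross_attn_head_mask=torch.ones(1000, 16))
except ValueError as err:
    print(err)  # the informative message described under "Expected Output" above
```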
02-21-2023 14:53:48
02-21-2023 14:53:48
Great catch! The PR https://github.com/huggingface/transformers/pull/21730 should add the fix you proposed
transformers
21,727
closed
Fix to KerasMetricCallback when the model returns unstructured output
The `KerasMetricCallback` was only tested on `transformers` models, which usually return dict-like `ModelOutput`. As a result, I missed a bug when the model is a more classical Keras model that just returns a naked array. Thanks to @leadbetterben for pointing out the issue. Fixes #21674.
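A minimal sketch of the kind of guard such a fix needs (not the PR's actual diff): treat dict-like outputs and bare prediction arrays uniformly before metrics are computed.

```python
import numpy as np


def extract_predictions(model_output):
    # transformers models usually return a dict-like ModelOutput...
    if isinstance(model_output, dict):
        return {key: value for key, value in model_output.items() if key != "loss"}
    # ...while a plain Keras model may hand back the prediction array directly.
    return np.asarray(model_output)


print(extract_predictions({"logits": np.zeros((2, 3)), "loss": 0.1}).keys())  # dict_keys(['logits'])
print(extract_predictions(np.zeros((2, 3))).shape)                            # (2, 3)
```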
02-21-2023 14:44:39
02-21-2023 14:44:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>I don't even know if we're testing this callback at all, because it's not really a core piece of `transformers`. I'll merge this for now and then think about where we could add some tests in a future PR!
transformers
21,726
closed
Fix `ErnieMEmbeddings` device issue
# What does this PR do? Fix `ErnieMEmbeddings` CI failure for ```bash tests/models/ernie_m/test_modeling_ernie_m.py::ErnieMModelTest::test_multi_gpu_data_parallel_forward ``` See comments in the changes.
02-21-2023 13:06:55
02-21-2023 13:06:55
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,725
closed
Make ImageProcessorMixin compatible with subfolder kwarg
Adds subfolder support to the ImageProcessorMixin to be able to load models from a specific subfolder on the HuggingFace repo. See the issue: https://github.com/huggingface/transformers/issues/21715
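A usage sketch of what this enables; the repository id and subfolder name below are hypothetical.

```python
from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained(
    "some-user/some-multimodal-repo",  # hypothetical repo
    subfolder="image_processor",       # hypothetical subfolder holding preprocessor_config.json
)
```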
02-21-2023 11:47:10
02-21-2023 11:47:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I have added the test. Let me know your thoughts please.
transformers
21,724
closed
Invalid header value when loading "bert-base-uncased"
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ```python Python 3.10.9 (main, Jan 11 2023, 15:21:40) [GCC 11.2.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.10.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: from transformers import AutoTokenizer In [2]: tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[2], line 1 ----> 1 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") File ~/miniconda3/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:598, in AutoTokenizer.from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 595 return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 597 # Next, let's try to use the tokenizer_config file to get the tokenizer class. --> 598 tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs) 599 if "_commit_hash" in tokenizer_config: 600 kwargs["_commit_hash"] = tokenizer_config["_commit_hash"] File ~/miniconda3/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py:442, in get_tokenizer_config(pretrained_model_name_or_path, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, **kwargs) 380 """ 381 Loads the tokenizer configuration from a pretrained model tokenizer configuration. 382 (...) 
439 tokenizer_config = get_tokenizer_config("tokenizer-test") 440 ```""" 441 commit_hash = kwargs.get("_commit_hash", None) --> 442 resolved_config_file = cached_file( 443 pretrained_model_name_or_path, 444 TOKENIZER_CONFIG_FILE, 445 cache_dir=cache_dir, 446 force_download=force_download, 447 resume_download=resume_download, 448 proxies=proxies, 449 use_auth_token=use_auth_token, 450 revision=revision, 451 local_files_only=local_files_only, 452 subfolder=subfolder, 453 _raise_exceptions_for_missing_entries=False, 454 _raise_exceptions_for_connection_errors=False, 455 _commit_hash=commit_hash, 456 ) 457 if resolved_config_file is None: 458 logger.info("Could not locate the tokenizer configuration file, will try to use the model config instead.") File ~/miniconda3/lib/python3.10/site-packages/transformers/utils/hub.py:409, in cached_file(path_or_repo_id, filename, cache_dir, force_download, resume_download, proxies, use_auth_token, revision, local_files_only, subfolder, user_agent, _raise_exceptions_for_missing_entries, _raise_exceptions_for_connection_errors, _commit_hash) 406 user_agent = http_user_agent(user_agent) 407 try: 408 # Load from URL or cache if already cached --> 409 resolved_file = hf_hub_download( 410 path_or_repo_id, 411 filename, 412 subfolder=None if len(subfolder) == 0 else subfolder, 413 revision=revision, 414 cache_dir=cache_dir, 415 user_agent=user_agent, 416 force_download=force_download, 417 proxies=proxies, 418 resume_download=resume_download, 419 use_auth_token=use_auth_token, 420 local_files_only=local_files_only, 421 ) 423 except RepositoryNotFoundError: 424 raise EnvironmentError( 425 f"{path_or_repo_id} is not a local folder and is not a valid model identifier " 426 "listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to " 427 "pass a token having permission to this repo with `use_auth_token` or log in with " 428 "`huggingface-cli login` and pass `use_auth_token=True`." 
429 ) File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) 119 if check_use_auth_token: 120 kwargs = smoothly_deprecate_use_auth_token( 121 fn_name=fn.__name__, has_token=has_token, kwargs=kwargs 122 ) --> 124 return fn(*args, **kwargs) File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:1106, in hf_hub_download(repo_id, filename, subfolder, repo_type, revision, library_name, library_version, cache_dir, user_agent, force_download, force_filename, proxies, etag_timeout, resume_download, token, local_files_only, legacy_cache_layout) 1104 try: 1105 try: -> 1106 metadata = get_hf_file_metadata( 1107 url=url, 1108 token=token, 1109 proxies=proxies, 1110 timeout=etag_timeout, 1111 ) 1112 except EntryNotFoundError as http_error: 1113 # Cache the non-existence of the file and raise 1114 commit_hash = http_error.response.headers.get( 1115 HUGGINGFACE_HEADER_X_REPO_COMMIT 1116 ) File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124, in validate_hf_hub_args.<locals>._inner_fn(*args, **kwargs) 119 if check_use_auth_token: 120 kwargs = smoothly_deprecate_use_auth_token( 121 fn_name=fn.__name__, has_token=has_token, kwargs=kwargs 122 ) --> 124 return fn(*args, **kwargs) File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:1432, in get_hf_file_metadata(url, token, proxies, timeout) 1429 headers = build_hf_headers(token=token) 1431 # Retrieve metadata -> 1432 r = _request_wrapper( 1433 method="HEAD", 1434 url=url, 1435 headers=headers, 1436 allow_redirects=False, 1437 follow_relative_redirects=True, 1438 proxies=proxies, 1439 timeout=timeout, 1440 ) 1441 hf_raise_for_status(r) 1443 # Return File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:405, in _request_wrapper(method, url, max_retries, base_wait_time, max_wait_time, timeout, follow_relative_redirects, **params) 403 # 2. Force relative redirection 404 if follow_relative_redirects: --> 405 response = _request_wrapper( 406 method=method, 407 url=url, 408 max_retries=max_retries, 409 base_wait_time=base_wait_time, 410 max_wait_time=max_wait_time, 411 timeout=timeout, 412 follow_relative_redirects=False, 413 **params, 414 ) 416 # If redirection, we redirect only relative paths. 417 # This is useful in case of a renamed repository. 418 if 300 <= response.status_code <= 399: File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/file_download.py:440, in _request_wrapper(method, url, max_retries, base_wait_time, max_wait_time, timeout, follow_relative_redirects, **params) 437 return response 439 # 3. Exponential backoff --> 440 return http_backoff( 441 method=method, 442 url=url, 443 max_retries=max_retries, 444 base_wait_time=base_wait_time, 445 max_wait_time=max_wait_time, 446 retry_on_exceptions=(ConnectTimeout, ProxyError), 447 retry_on_status_codes=(), 448 timeout=timeout, 449 **params, 450 ) File ~/miniconda3/lib/python3.10/site-packages/huggingface_hub/utils/_http.py:129, in http_backoff(method, url, max_retries, base_wait_time, max_wait_time, retry_on_exceptions, retry_on_status_codes, **kwargs) 126 kwargs["data"].seek(io_obj_initial_pos) 128 # Perform request and return if status_code is not in the retry list. 
--> 129 response = requests.request(method=method, url=url, **kwargs) 130 if response.status_code not in retry_on_status_codes: 131 return response File ~/miniconda3/lib/python3.10/site-packages/requests/api.py:59, in request(method, url, **kwargs) 55 # By using the 'with' statement we are sure the session is closed, thus we 56 # avoid leaving sockets open which can trigger a ResourceWarning in some 57 # cases, and look like a memory leak in others. 58 with sessions.Session() as session: ---> 59 return session.request(method=method, url=url, **kwargs) File ~/miniconda3/lib/python3.10/site-packages/requests/sessions.py:587, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 582 send_kwargs = { 583 "timeout": timeout, 584 "allow_redirects": allow_redirects, 585 } 586 send_kwargs.update(settings) --> 587 resp = self.send(prep, **send_kwargs) 589 return resp File ~/miniconda3/lib/python3.10/site-packages/requests/sessions.py:701, in Session.send(self, request, **kwargs) 698 start = preferred_clock() 700 # Send the request --> 701 r = adapter.send(request, **kwargs) 703 # Total elapsed time of the request (approximately) 704 elapsed = preferred_clock() - start File ~/miniconda3/lib/python3.10/site-packages/requests/adapters.py:489, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies) 487 try: 488 if not chunked: --> 489 resp = conn.urlopen( 490 method=request.method, 491 url=url, 492 body=request.body, 493 headers=request.headers, 494 redirect=False, 495 assert_same_host=False, 496 preload_content=False, 497 decode_content=False, 498 retries=self.max_retries, 499 timeout=timeout, 500 ) 502 # Send the request. 503 else: 504 if hasattr(conn, "proxy_pool"): File ~/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py:703, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 700 self._prepare_proxy(conn) 702 # Make the request on the httplib connection object. --> 703 httplib_response = self._make_request( 704 conn, 705 method, 706 url, 707 timeout=timeout_obj, 708 body=body, 709 headers=headers, 710 chunked=chunked, 711 ) 713 # If we're going to release the connection in ``finally:``, then 714 # the response doesn't need to know about the connection. Otherwise 715 # it will also try to release it and we'll have a double-release 716 # mess. 717 response_conn = conn if not release_conn else None File ~/miniconda3/lib/python3.10/site-packages/urllib3/connectionpool.py:398, in HTTPConnectionPool._make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 396 conn.request_chunked(method, url, **httplib_request_kw) 397 else: --> 398 conn.request(method, url, **httplib_request_kw) 400 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is 401 # legitimately able to close the connection after sending a valid response. 402 # With this behaviour, the received response is still readable. 
403 except BrokenPipeError: 404 # Python 3 File ~/miniconda3/lib/python3.10/site-packages/urllib3/connection.py:239, in HTTPConnection.request(self, method, url, body, headers) 237 if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): 238 headers["User-Agent"] = _get_default_user_agent() --> 239 super(HTTPConnection, self).request(method, url, body=body, headers=headers) File ~/miniconda3/lib/python3.10/http/client.py:1282, in HTTPConnection.request(self, method, url, body, headers, encode_chunked) 1279 def request(self, method, url, body=None, headers={}, *, 1280 encode_chunked=False): 1281 """Send a complete request to the server.""" -> 1282 self._send_request(method, url, body, headers, encode_chunked) File ~/miniconda3/lib/python3.10/http/client.py:1323, in HTTPConnection._send_request(self, method, url, body, headers, encode_chunked) 1320 encode_chunked = False 1322 for hdr, value in headers.items(): -> 1323 self.putheader(hdr, value) 1324 if isinstance(body, str): 1325 # RFC 2616 Section 3.7.1 says that text default has a 1326 # default charset of iso-8859-1. 1327 body = _encode(body, 'body') File ~/miniconda3/lib/python3.10/site-packages/urllib3/connection.py:224, in HTTPConnection.putheader(self, header, *values) 222 """ """ 223 if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): --> 224 _HTTPConnection.putheader(self, header, *values) 225 elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: 226 raise ValueError( 227 "urllib3.util.SKIP_HEADER only supports '%s'" 228 % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),) 229 ) File ~/miniconda3/lib/python3.10/http/client.py:1260, in HTTPConnection.putheader(self, header, *values) 1257 values[i] = str(one_value).encode('ascii') 1259 if _is_illegal_header_value(values[i]): -> 1260 raise ValueError('Invalid header value %r' % (values[i],)) 1262 value = b'\r\n\t'.join(values) 1263 header = header + b': ' + value ValueError: Invalid header value b'Bearer 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\n' ``` ### Expected behavior Above scripts should load the tokenizer normally but now it raises `ValueError: Invalid header value b` I am using Python from latest Miniconda and I only install `torch` `transformers`. Not sure what the cause of the issue.
02-21-2023 11:21:06
02-21-2023 11:21:06
Solved via deleting all cached files under ```~/.cache/huggingface```
transformers
21,723
closed
Change doc example for `BigBirdForQuestionAnswering`
# What does this PR do? Checkpoint `"abhinavkulkarni/bigbird-roberta-base-finetuned-squad"` is no longer public or is removed by the user. Use base model `google/bigbird-roberta-base` and not to test against expected outputs.
02-21-2023 11:08:42
02-21-2023 11:08:42
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,722
closed
Remove `gptsan_japanese` from doctest list to avoid GPU OOM
# What does this PR do? Remove `gptsan_japanese` from doctest list to avoid GPU OOM (which affects some other model doctesting)
02-21-2023 09:42:07
02-21-2023 09:42:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,721
closed
700 hp - 1250 hp
Audi A3 FWD 2.0T CHANGE Engine Turbocharger ES#4147635 R410 Turbo Upgrade Kit & Tuning Package The R410 Turbo Kit was designed for the track day enthusiast looking for increased horsepower and torque throughout the powerband, without sacrificing response and reliability. End user to work directly with 034Motorsport on tune - While software is included, the ECU will still need to be sent to 034 for the initial flash load* For vehicles with FSI engines only Mfg #034-145-1015ECS #ES#4147635 Add to Wish ListTrack & Share 4347.00 Audi 8P (2005-2013) 2.0T FWD For vehicles with FSI engines only ECS Images Upload PRODUCT DETAILS FITS THESE CARS Product Details End user to work directly with 034Motorsport on tune - While software is included, the ECU will still need to be sent to 034 for the initial flash load* For vehicles with FSI engines only 034Motorsport is proud to offer the R410 Hybrid Turbocharger Upgrade Kit for the 8J/8P Audi TT/A3 & MkV Volkswagen GTI/GLI 2.0T FSI! R410 was designed for the track day enthusiast who desires an improved usable powerband without sacrificing reliability. Providing significant increases in horsepower and torque throughout the powerband, the R410 Turbo Kit shines on track where the factory turbocharger can’t keep up. Consisting of an OEM+ hybrid turbocharger upgrade and 034Motorsport’s proprietary performance software, R410 is the elegant, reliable solution for breathtaking performance on the street or track, at an excellent price point. At the center of the 034Motorsport R410 Turbo Kit is the LOBA LO410-EA113DV drop-in hybrid turbocharger upgrade. Made in Germany and based on the factory Borg Warner unit, this turbocharger features a state-of-the-art billet compressor wheel to allow for higher flow. The backplate, compressor housing, turbine housing, and exhaust manifold have all been CNC-machined for optimal flow and increased performance. In addition, the LO410-EA113DV turbocharger features an upgraded thrust bearing and is precision-balanced to ensure reliability. 034Motorsport spent a significant amount of time developing and verifying our proprietary software to ensure that R410 delivers consistent, reliable power under grueling track conditions. Through optimization of the factory ECU's boost, fueling, and timing maps, the R410 Tuning Package brings out the potential of the LO410-EA113DV Turbocharger. Peak boost ranges from 20-23 PSI (octane dependent) and tapers to 18 PSI by the new 7,100 RPM redline to keep the turbo running at its optimum efficiency. This conservative, track-oriented boost mapping provides rock-solid performance lap after lap, and is combined with an advanced boost control strategy that allows for increased precision beyond factory limits. Going beyond power improvements, 034Motorsport’s calibrator also made improvements to the throttle mapping, increased idle stability, and enabled left-foot braking. The end result is a tune that drives as smoothly as a factory calibration, with power delivery that is consistent and manageable on the street and track alike. The R410 Hybrid Turbocharger Upgrade Kit installs as a drop-in replacement for the factory parts, without requiring extensive modifications to other components. Every R410 Tuning Package includes a fully-loaded PL34 Handheld Flash-Loader that allows the end user to reflash between 91, 93 and 100 octane files. 
Tuning Features: Developed In-House on the Street, Track, and 034Motorsport's Chassis Dyno Optimized Boost, Timing, and Fueling Maps for Increased Horsepower & Torque Includes PL34 Handheld Flash-Loader with 91 Octane, 93 Octane, 100, and 104 Octane Tunes Increased Rev Limiter to 7,100 RPM Speed Limiter (Governor) Removed Improved Throttle Response & Power Delivery Refined Throttle Mapping for Part Throttle Drive-ability Increased Idle Stability (Especially Helpful with Lightweight Flywheels!) Hardware Features: LOBA LO410-EA113DV Turbocharger LOBA CNC-Machined Billet Compressor Wheel 5-Axis CNC Re-Profiled Compressor Housing & Backplate 5-Axis CNC-Machined Turbine Housing Upgraded Thrust Bearing Upgraded Wastegate Actuator High-Precision Balancing Made in Germany Audi S3 FSI Fuel Injectors 155 bar HPFP PRV 3 bar MAP Sensor PL34 Hand-Held Flash Loader Installation Hardware Kit Included! Compatible Vehicles: 2006 - 2008 Audi TT (8J) 2.0T FSI 2006 - 2008 Audi A3  (8P) 2.0T FSI 2006 - 2008 Volkswagen Eos / GLI / GTI  (MkV) 2.0T FSI Required Supporting Modifications: High Pressure Fuel Pump (HPFP) Upgrade Performance Downpipe Upgrade Performance Intercooler Upgrade Performance Air Intake  Recommended Supporting Modifications: Low Pressure Fuel Pump (LPFP) Upgrade Tune Installation: Initial Installation: Flashed directly to your vehicle's ECU through the OBD-2 port using the included PL34 Handheld Flash-Loader. Program Switching: Once the initial flash is performed, end users can flash between programs using the included PL34 Handheld Flash-Loader.  Peak Horsepower & Torque: 91 Octane - 334 Horsepower / 319 Foot-Pounds of Torque 93 Octane - 354 Horsepower / 347 Foot-Pounds of Torque 100 Octane - 376 Horsepower / 353 Foot-Pounds of Torque Peak Horsepower & Torque Gains Under Curve: 91 Octane - 137 HP @ 6,400 RPM / 114 TQ @ 5,550 RPM 93 Octane - 155 HP @ 6,400 RPM / 136 TQ @ 4,600 RPM 100 Octane - 179 HP @ 6,400 RPM / 148 TQ @ 5,850 RPM Call Us330-331-2003 ChatLive Chat Company About Us Careers Contact Us ECS Blog Become a Dealer Sponsorships / Partnerships My Account Sign In / Create Account My Wish Lists My Cart My Vehicles My Orders Customer Service Shipping Policy Payment Methods Returns and Warranty Policy Track Order Submit a Return Product Warranty Site Security Vehicles BMW Parts VW Parts Audi Parts Mercedes Parts Porsche Parts MINI Parts Supported Vehicles Wanted: Development Vehicles Your Opinion Matters! We invite you to share your shopping experiences with ECS so we can better meet your needs
02-21-2023 09:11:46
02-21-2023 09:11:46
good thing that this is on GitHub and not on the hub
transformers
21,720
closed
Multi-GPU inference using accelerate giving inaccurate/gibberish results on RTX 4090s
### System Info I am trying to use pretrained opt-6.7b model for inference, by using "device_map" as "auto", "balanced", basically scenarios where model weights are spread across both GPUs; the results produced are inaccurate and gibberish. Although if model is loaded on just one GPU, it works fine. Tried opt-13b and similarly sized other models spread across both GPUs, as well, but all of them produce gibberish results. Also, tried the same with different GPUs, 2x RTX 3090, 2x A5000; works fine with these. Basically, only when a model is spread across multiple RTX 4090s, the results are gibberish. Specs: python=3.9 transformers==4.26.1 accelerate==0.16.0 cuda==11.6 Code: from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("facebook/opt-6.7b", torch_dtype=torch.float16, device_map="auto") tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b", use_fast=False) prompt = "What is the color of carrots?\nAnswer:" input_ids = tokenizer(prompt, return_tensors="pt").input_ids.cuda() generated_ids = model.generate(input_ids) tokenizer.batch_decode(generated_ids, skip_special_tokens=True) ['What is the color of carrots?\nAnswer: toinhoza, and the other half of'] ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps: 1. Use 2 x RTX 4090 node 2. Install transformers, accelerate 3. Use above code to load and infer using opt model, load pretrained models, such as opt-6.7b/opt-13b 4. Set device_map as "auto" or "balanced" 5. Run inference ### Expected behavior With prompt = "What is the color of carrots?\nAnswer:" Result should be "What is the color of carrots?\nAnswer: Orange"
02-21-2023 08:35:03
02-21-2023 08:35:03
If you use `bfloat16` as a torch dtype, do you get the same results? I'm wondering if somehow the ops in float16 are badly implemented on those GPUs.<|||||>Just tried, still outputs gibberish. prompt = "What is the color of carrots?\nAnswer:" output (bfloat16) = 'What is the color of carrots?\nAnswer: " "' Important thing to note here is, if model is loaded on just one gpu, it works fine. That should help us narrow down the possible causes.<|||||>I have been reading up on this, seems to be a Nvidia driver issue, which is still unfixed. Basically, issue is with P2P memory access with the RTX 4090s. Seems like we are stuck with Nvidia to fix the issue.<|||||>Thanks for investigating. Do you have a link you could share so that others reading the issue later on can have all the info?<|||||>https://forums.developer.nvidia.com/t/standard-nvidia-cuda-tests-fail-with-dual-rtx-4090-linux-box<|||||>I tried running the same script in Windows, works fine. So, the conclusion is, we need to wait for Nvidia to update the drivers for Linux. Closing the issue. Thanks for your time @sgugger .
transformers
21,719
closed
fsmt translation with use_cache=True bug
### System Info I'm running [run_translation.py](https://github.com/huggingface/transformers/tree/main/examples/pytorch/translation) to train from scratch with fsmt artifacts. I'm using - `transformers` version: 4.27.0.dev0 - Platform: Linux-3.10.0-862.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) ### Who can help? @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Detailed configuration `run_translation_config.json` as follow: ```json { "config_name": "allenai/wmt19-de-en-6-6-base", "tokenizer_name": "allenai/wmt16-en-de-12-1", "use_fast_tokenizer": false, "dataset_name": "wmt16", "dataset_config_name": "de-en", "source_lang": "en", "target_lang": "de", "max_source_length": 512, "max_target_length": 512, "generation_max_length": 512, "preprocessing_num_workers": 32, "output_dir": "./tmp/transformer_base_wmt16_6_6_en-de_fsmt", "run_name": "transformer_base_wmt16_6_6_en-de_fsmt", "deepspeed": "./ds_stage2_config.json", (NOT important) "sortish_sampler": true, "overwrite_output_dir": true, "do_train": true, "do_eval": true, "do_predict": false, "evaluation_strategy": "steps", "eval_steps": 2000, "predict_with_generate": true, "load_best_model_at_end": true, "logging_strategy": "steps", "logging_steps": 1000, "logging_first_step" :true, "eval_accumulation_steps": 16, "dataloader_num_workers": 32, "per_device_train_batch_size": 64, "per_device_eval_batch_size": 64, "gradient_accumulation_steps": 4, "fp16": true, "adam_beta1": 0.9, "adam_beta2":0.998, "adam_epsilon": 1e-08, "learning_rate": 5e-4, "weight_decay": 0.01, "label_smoothing_factor": 0.1, "lr_scheduler_type": "linear", "warmup_ratio": 0.04, "num_train_epochs": 25, "save_strategy": "steps", "save_steps": 2000, "save_total_limit": 10, "seed": 42 } ``` 2. 
run command ```shell python -m torch.distributed.launch --nproc_per_node 8 \ run_translation.py \ run_translation_config.json ``` ### Expected behavior ``` Traceback (most recent call last): File "run_translation.py", line 675, in <module> main() File "run_translation.py", line 592, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 1570, in train return inner_training_loop( File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 1835, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 2583, in training_step loss = self.compute_loss(model, inputs) File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer.py", line 2625, in compute_loss loss = self.label_smoother(outputs, labels) File "/home/ace14487oj/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 498, in __call__ nll_loss = log_probs.gather(dim=-1, index=labels) RuntimeError: Size does not match at dimension 1 expected index [64, 104, 1] to be smaller than self [64, 1, 43536] apart from dimension 2 ``` After some investigation, I found the error was about use_cache implementation and only happens if I leave use_cache = true in fsmt_model_config. When I tried use_cache = false, the error disappear. I also tried to train BART with run_translation.py on exactly same parameters (use_cache = true), and similar issue does not happen on BART. Has anybody run into this issue as well or otherwise maybe confirm this is a bug? Thanks!
02-21-2023 06:48:30
02-21-2023 06:48:30
Hi @lihaoxin2020 Thanks for the issue! I tried to run the script with the command: ```bash python -m torch.distributed.launch --nproc_per_node 2 run_translation.py run_translation_config.json ``` and getting: ``` run_translation.py: error: the following arguments are required: --model_name_or_path, --output_dir ``` If I put the correct values for those flags I get `ValueError: Need either a dataset name or a training/validation file.` I will try to see if I can reproduce locally without having to run this script<|||||>Based on your traceback, here is a script I made to try to reproduce the issue, ```python import torch from transformers import AutoModelForSeq2SeqLM from transformers.trainer_pt_utils import LabelSmoother model = AutoModelForSeq2SeqLM.from_pretrained("allenai/wmt19-de-en-6-6-base") dummy_input = torch.LongTensor([[0, 1, 1, 2, 3]]) labels = torch.LongTensor([[0, 1, 1, 2, 3]]) outputs = model(input_ids=dummy_input, use_cache=True) label_smoother = LabelSmoother() loss = label_smoother(outputs, labels, shift_labels=True) ``` Does this script works for you? <|||||>> Based on your traceback, here is a script I made to try to reproduce the issue, > > ```python > import torch > from transformers import AutoModelForSeq2SeqLM > from transformers.trainer_pt_utils import LabelSmoother > > model = AutoModelForSeq2SeqLM.from_pretrained("allenai/wmt19-de-en-6-6-base") > dummy_input = torch.LongTensor([[0, 1, 1, 2, 3]]) > labels = torch.LongTensor([[0, 1, 1, 2, 3]]) > > outputs = model(input_ids=dummy_input, use_cache=True) > > label_smoother = LabelSmoother() > loss = label_smoother(outputs, labels, shift_labels=True) > ``` > > Does this script works for you? Hi @younesbelkada ! For this script, I think you need to try `outputs = model(input_ids=dummy_input, decoder_input_ids=labels, use_cache=True)` to make it equivalent to the context I referred to. I got the same error with the new `outputs` line: ``` Traceback (most recent call last): File "./playground.py", line 16, in <module> loss = label_smoother(outputs, labels, shift_labels=True) File "/mmfs1/home/lihaoxin/workspace/lihaoxin/anaconda3/envs/pytorch_p38/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 498, in __call__ nll_loss = log_probs.gather(dim=-1, index=labels) RuntimeError: Size does not match at dimension 1 expected index [1, 4, 1] to be smaller than self [1, 0, 43536] apart from dimension 2 ``` <|||||>Hi @lihaoxin2020 I managed to reproduce the issue, thanks a lot! As a quick fix I propose you to not use `use_cache` for now while we investigate what is happening ! Thanks!<|||||>I also had the same issue (albeit with a custom training script), here's what I think is happening: In the run_translation config you've set `label_smoothing_factor` to greater than 0. As a result, the `labels` field is removed from the model forward call on the [Trainer file](https://github.com/huggingface/transformers/blob/main/src/transformers/trainer.py) when computing loss: ``` if self.label_smoother is not None and "labels" in inputs: labels = inputs.pop("labels") ``` This is done because if labels is provided to the FSMT model when called, the loss will be calculated twice. (Once in the FSMT model call, and once in the label smoothing class). 
In [modeling_fsmt](https://github.com/huggingface/transformers/blob/main/src/transformers/models/fsmt/modeling_fsmt.py), we can see that use_cache is only implicitly disabled when labels is provided, which is not the case with label smoothing enabled: ``` if labels is not None: use_cache = False ``` In FSMTDecoder, we can see that when use_cache is enabled, even during training, it will slice off all of the input IDs except for the last whenever called: ``` if use_cache: input_ids = input_ids[:, -1:] positions = positions[:, -1:] # happens after we embed them ``` I don't know how BART handles it, but in [Marian](https://github.com/huggingface/transformers/blob/main/src/transformers/models/marian/modeling_marian.py) (which copies a lot of BART code), they only slice off the input_ids when preparing inputs for generation (past_key_values is probably only passed when use_cache is on): ``` def prepare_inputs_for_generation( self, input_ids, past_key_values=None, attention_mask=None, use_cache=None, **kwargs ): # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_ids.shape) if past_key_values: input_ids = input_ids[:, -1:] ``` [younesbelkada](https://github.com/younesbelkada)'s solution definitely works for now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,718
closed
Fixed a bug in remove_handler function
Fixed the bug mentioned in this issue: #21506 And replaced `Assertion`s with `ValueError`s only in that same function that contained the bug. @LysandreJik
02-21-2023 04:08:52
02-21-2023 04:08:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21718). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,717
closed
Does generate() support "Export to TorchScript"
### Feature request Does T5 `model.generate` support "Export to TorchScript" https://huggingface.co/docs/transformers/main/en/torchscript I want to make a faster deployment and transform `model.generate` like this: https://discuss.huggingface.co/t/t5-base-not-torchscriptable/11173 thanks all
02-21-2023 03:55:09
02-21-2023 03:55:09
cc @gante <|||||>Hey @gongel 👋 The model foward pass probably can be serialized, but the full `model.generate` cannot. We are working on the serialization of `model.generate` as we speak, in the context of PyTorch dynamo. Can I be of further assistance? :)<|||||>Thank you, I'm looking forward to it<|||||>Is there an issue tracking this? What's the status of `TorchScript`ing `model.generate`? Or can `generate` be a standalone function and call `TorchScript` module?<|||||>@ZisIsNotZis AFAIK no tracking issue. We are exploring generation speedups (which will likely include static shapes, i.e. should be TorchScript compilable) at the moment, to be released in ~3 months.
transformers
21,716
closed
run_mlm.py shows error
### System Info Hi. I'm training bert model with mlm with following command. It seems that the values in attention_mask, token_type_id gets invalid ``` TOKENIZERS_PARALLELISM=false \ NCCL_P2P_DISABLE=1 python3 run_mlm.py \ --model_name_or_path "kykim/bert-kor-base" \ --tokenizer_name "kykim/bert-kor-base" \ --train_file /mnt/STT_lm/korea_addr_50000_numtotext.txt \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --do_train \ --do_eval \ --output_dir ./snapshots/test-mlm-50000 \ --overwrite_output_dir \ --dataloader_num_workers 8 \ --max_seq_length 200 #\ # --line_by_line ``` after few batches it throws this.. ``` [INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,285 >> BertForMaskedLM attention_mask tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:1') token_type_ids tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:1') position_ids None head_mask None inputs_embeds None encoder_hidden_states None encoder_attention_mask None output_attentions None output_hidden_states None return_dict True [INFO|modeling_bert.py:1388] 2023-02-21 03:18:09,295 >> prediction_scores: tensor([[0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], ..., [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.], [0., 0., 0., ..., 0., 0., 0.]], device='cuda:1', grad_fn=<ViewBackward0>) labels: tensor([0, 0, 0, ..., 0, 0, 0], device='cuda:1') [INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,305 >> BertForMaskedLM attention_mask tensor([[ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 0, 0, 0], ..., [ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 0, 0, 0]], device='cuda:3') token_type_ids tensor([[139726224884016, 139726226097792, 3254755329, ..., 0, 0, 139726226464001], [ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 0, 0, 0], ..., [ 0, 0, 0, ..., 0, 0, 0], [ 0, 0, 0, ..., 139726226538304, 106848880, 139761598107536], [139726224884032, 139726226098720, 1, ..., 0, 0, 0]], device='cuda:3') position_ids None head_mask None inputs_embeds None encoder_hidden_states None encoder_attention_mask None output_attentions None output_hidden_states None return_dict True [INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,306 >> BertForMaskedLM attention_mask tensor([[139726092393712, 0, 139726092318512, ..., 139726092393408, 0, 139726092318512], [139726092393104, 0, 139726092318512, ..., 139726092392800, 0, 139726092318512], [139726092392496, 0, 139726092318512, ..., 139726092392192, 0, 139726092318512], ..., [139726092391888, 0, 139726092318512, ..., 139726092391584, 0, 139726092318512], [139726092328960, 0, 139726092318512, ..., 139726092328656, 0, 139726092318512], [139726092328352, 0, 139726092318512, ..., 139726092328048, 0, 139726092318512]], device='cuda:2') token_type_ids tensor([[ 0, 0, 0, ..., 139726092397664, 0, 139726092324144], [139726092397360, 0, 139726092324144, ..., 139726092397056, 0, 139726092324144], [139726092396752, 0, 139726092324144, ..., 139726092329088, 0, 139726092324144], ..., [139726092328784, 0, 139726092324144, ..., 139726092328480, 0, 139726092324144], [139726092328176, 0, 139726092324144, ..., 139726092324144, 106848880, 139761598047072], [139726090666304, 139726092316064, 1, ..., 0, 0, 0]], device='cuda:2') position_ids None head_mask None inputs_embeds None 
encoder_hidden_states None encoder_attention_mask None output_attentions None output_hidden_states None return_dict True [INFO|modeling_bert.py:1370] 2023-02-21 03:18:09,306 >> BertForMaskedLM attention_mask tensor([[139755215913280, 139755216000544, 1, ..., 0, 0, 0], [139755215913280, 139755216808704, 1, ..., 139755216809376, 106848880, 139761598326640], [139755215913280, 139755216807568, 1, ..., 139755216803312, 0, 139755216809376], ..., [139755215913280, 139755216806672, 1, ..., 139755216797312, 0, 139755216809376], [139755216804336, 0, 139755216809376, ..., 139755215913280, 139755215913280, 1], [139755215913376, 139755216824784, 139764707470048, ..., 139762620122257, 0, 0]], device='cuda:0') token_type_ids tensor([[139761598326800, 0, 0, ..., 139755216815408, 139755215913088, 64], [139755216806224, 139755215913088, 64, ..., 139755216807216, 139755215913088, 64], [139755215913280, 139755216814896, 32, ..., 139755215913280, 139755216819504, 1], ..., [139755215913280, 139755215913280, 1, ..., 139755215913328, 139755215913328, 32], [139755215913376, 139755216830560, 139764707470048, ..., 0, 0, 0], [ 0, 0, 0, ..., 139755215913232, 139755215913232, 0]], device='cuda:0') position_ids None head_mask None inputs_embeds None encoder_hidden_states None encoder_attention_mask None output_attentions None output_hidden_states None return_dict True ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [0,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [3,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [7,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [8,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [10,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [11,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed. 
../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [13,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [15,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [17,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [18,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [19,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [20,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [21,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [22,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [23,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [24,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [25,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [26,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [27,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [28,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [29,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [30,0,0] Assertion `t >= 0 && t < n_classes` failed. ../aten/src/ATen/native/cuda/Loss.cu:242: nll_loss_forward_reduce_cuda_kernel_2d: block: [0,0,0], thread: [31,0,0] Assertion `t >= 0 && t < n_classes` failed. 
Traceback (most recent call last): File "run_mlm.py", line 645, in <module> main() File "run_mlm.py", line 594, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/transformers/src/transformers/trainer.py", line 1576, in train return inner_training_loop( File "/transformers/src/transformers/trainer.py", line 1843, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/transformers/src/transformers/trainer.py", line 2588, in training_step loss = self.compute_loss(model, inputs) File "/transformers/src/transformers/trainer.py", line 2620, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 171, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/data_parallel.py", line 181, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 89, in parallel_apply output.reraise() File "/usr/local/lib/python3.8/dist-packages/torch/_utils.py", line 543, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/parallel_apply.py", line 64, in _worker output = module(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/transformers/src/transformers/models/bert/modeling_bert.py", line 1384, in forward File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/transformers/src/transformers/models/bert/modeling_bert.py", line 708, in forward prediction_scores = self.predictions(sequence_output) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/transformers/src/transformers/models/bert/modeling_bert.py", line 697, in forward hidden_states = self.transform(hidden_states) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/transformers/src/transformers/models/bert/modeling_bert.py", line 676, in forward hidden_states = self.dense(hidden_states) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasLtMatmul( ltHandle, computeDesc.descriptor(), &alpha_val, mat1_ptr, Adesc.descriptor(), mat2_ptr, Bdesc.descriptor(), &beta_val, result_ptr, Cdesc.descriptor(), result_ptr, Cdesc.descriptor(), &heuristicResult.algo, workspace.data_ptr(), workspaceSize, at::cuda::getCurrentCUDAStream())` ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [ ] My own task or dataset (give details below) ### Reproduction TOKENIZERS_PARALLELISM=false \ NCCL_P2P_DISABLE=1 python3 run_mlm.py \ --model_name_or_path "kykim/bert-kor-base" \ --tokenizer_name "kykim/bert-kor-base" \ --train_file /mnt/STT_lm/korea_addr_50000_numtotext.txt \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --do_train \ --do_eval \ --output_dir ./snapshots/test-mlm-50000 \ --overwrite_output_dir \ --dataloader_num_workers 8 \ --max_seq_length 200 #\ # --line_by_line ### Expected behavior .
02-21-2023 03:29:23
02-21-2023 03:29:23
Could you please provide the result of `transformers-cli env` as instructed in the template?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,715
closed
Make `CLIPImageProcessor` compatible with `subfolder` kwarg
### Feature request The CLIPImageProcessor can't be initialized currently using: ``` CLIPImageProcessor.from_pretrained('kakaobrain/karlo-v1-alpha-image-variations', subfolder = 'feature_extractor', return_unused_kwargs=True) ``` as the `ImageProcessorMixin.get_image_processor_dict` method doesn't take in a `subfolder` kwarg to pass into `cached_file` Link: https://github.com/huggingface/transformers/blob/main/src/transformers/image_processing_utils.py#L214 Can we add the subfolder kwarg to be able to initialize a CLIPImageProcessor this way ? ### Motivation Currently, other modules like the `CLIPTextModelWithProjection` are able to load from subfolders. It'd be nice to have CLIPImageProcessor also behave the same way. ### Your contribution I'd be happy to open a PR on this. I was able to get it to work by passing `subfolder` directly to `cached_file` but need to verify a few more things.
02-21-2023 03:26:28
02-21-2023 03:26:28
Thanks for flagging! We'll be happy to have a look at a PR!<|||||>@sgugger, I created the PR. #21725 <|||||>Closing the issue as PR is merged and issue is resolved. :)
transformers
21,714
closed
Batch inference of GIT model
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: No When I use GIT model to do image caption, it throws an exception : ``` File "/usr/local/lib/python3.8/dist-packages/transformers/models/git/modeling_git.py", line 1272, in forward hidden_states = torch.cat((projected_visual_features, embedding_output), dim=1) RuntimeError: Sizes of tensors must match except in dimension 0. Got 1 and 0 (The offending index is 0) ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction to reproduce this error, use the following core code: ``` processor = AutoProcessor.from_pretrained("microsoft/git-base-coco") model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco").to(device) model.eval() ... pixel_values = processor(images=images, return_tensors="pt").pixel_values.to(device) print(pixel_values) # (10, 3, 224, 224) output_ids = model.generate(pixel_values=pixel_values, max_length=50) preds = processor.batch_decode(output_ids, skip_special_tokens=True)[0] ``` I have checked the source file "modeling_git.py" line 1272, ![image](https://user-images.githubusercontent.com/22444830/220228879-cae18b8f-f7f9-4d76-9ef8-f7d715011eba.png) due to **embedding_output.size(0)** is 1 but visual features size(0) is 10, 1//10 = 0, ![image](https://user-images.githubusercontent.com/22444830/220229180-cfee16ac-5e65-4aec-b04e-1ce778268706.png) so these two features cannot be concatenated. ### Expected behavior support batch input
02-21-2023 02:07:10
02-21-2023 02:07:10
cc @amyeroberts and @gante <|||||>Hi @JosephChenHub 👋 We have been working on similar problems since the release of v4.26. Can you please confirm that the problem still exists in `main`? (`pip install --upgrade git+https://github.com/huggingface/transformers.git`)<|||||>> pip install --upgrade git+https://github.com/huggingface/transformers.git Hi @gante, I use this command > Hi @JosephChenHub 👋 We have been working on similar problems since the release of v4.26. Can you please confirm that the problem still exists in `main`? > > (`pip install --upgrade git+https://github.com/huggingface/transformers.git`) Hi @gante , I use the source code of `main` branch (transformers==4.27.0.dev0), the issue still exists. <|||||>Hey @JosephChenHub 👋 Thank you for your confirmation :) I was able to track down the root cause -- #21738 fixes it! After it is merged, you can install from `main` again and it should work 🚀 <|||||>(@JosephChenHub It should be working now, let me know if it is not!)
transformers
21,713
closed
Unable to use BLIP2 with caption_coco_opt6.7b at HEAD via salesforce-lavis (also HEAD)
### System Info working: - `transformers` version: 4.26.1 - Platform: Linux-6.0.12-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no broken: - `transformers` version: 4.27.0.dev0 - Platform: Linux-6.0.12-x86_64-with-glibc2.10 - Python version: 3.8.16 - Huggingface_hub version: 0.12.0 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @gante @NielsRogge ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. Start with clean env setup via https://github.com/salesforce/LAVIS/blob/main/requirements.txt (transformers-4.26.1) 2. Run `python test_simple.py`, model is correctly loaded and prints a caption 3. `pip install --upgrade git+https://github.com/huggingface/transformers` (I wanted the new shiny blip2 conversion script so I can conver my finetuned model into HF format) 4. `Resolved https://github.com/huggingface/transformers to commit 8b3db33a763ccef828fca89bac7e6cbff314f131` 5. Run `python test_simple.py` 6. `RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list.` ```python import torch from lavis.models import load_model_and_preprocess import torch from PIL import Image import requests device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model, vis_processors, _ = load_model_and_preprocess(name="blip2_opt", model_type="caption_coco_opt6.7b", is_eval=True, device=device) url = "..." raw_image = Image.open(requests.get(url, stream=True).raw).convert("RGB") image = vis_processors["eval"](raw_image).unsqueeze(0).to(device) data = model.generate({"image": image}) print(data) ``` ### Expected behavior Can use BLIP2 with latest HF
02-21-2023 00:46:23
02-21-2023 00:46:23
cc @younesbelkada <|||||>Hey @AstraliteHeart 👋 This issue seems to be a duplicate of https://github.com/huggingface/transformers/issues/21599, which is fixed. Can I ask you to try to run your script using `transformers` `main` branch, i.e. after installing with `pip install --upgrade git+https://github.com/huggingface/transformers.git`?<|||||>I don't think this is a duplicate, my env is past that fix (see p4 in the original repro steps), I've updated form `main` to confirm as follows: 1. `pip install --upgrade git+https://github.com/huggingface/transformers.git` 2. `Resolved https://github.com/huggingface/transformers.git to commit bb5a2f2fc30985841289207b9f1f7765d8abc4e0` 3. `python test_simple.py` 4. `RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list.`<|||||>Thank you for confirming @AstraliteHeart 🤗 I will dig deeper and let you know what I find!<|||||>After some digging, we can see that the exception is raised as follows: ```py │ /home/joao/hf/lib/python3.10/site-packages/lavis/models/blip2_models/modeling_opt.py:703 in │ │ forward │ │ │ │ 700 │ │ │ inputs_embeds = self.embed_tokens(input_ids) │ │ 701 │ │ │ │ 702 │ │ if query_embeds is not None: │ │ ❱ 703 │ │ │ inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1) │ │ 704 │ │ │ input_shape = inputs_embeds.size()[:-1] │ │ 705 │ │ │ │ 706 │ │ # embed positions │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 25 but got size 5 for tensor number 1 in the list. ``` From the full stack trace, we can conclude that the error arises from an issue in `lavis`, and not in `transformers` :) Actually, the root cause for this issue is something that we have addressed [on this PR](https://github.com/huggingface/transformers/pull/21405) -- `lavis` has a different implementation, where they have a modified OPT model to handle the image embeddings, where we decided to update `.generate()` to handle soft-prompting. @AstraliteHeart This means you have two options: 1. Update your code to rely on `transformers`, as opposed to `lavis`. See [here](https://huggingface.co/docs/transformers/main/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example) for examples. 2. Open an issue in `lavis`, so they can help you with this issue :) <|||||>@gante thank you for debugging! I can confirm that syncing before https://github.com/huggingface/transformers/pull/21405 (edc1e734bfc01109b8c66881d950ebbda032a6d2) works, I'll open an issue on SF side to warn them about the breakage, unfortunately this brings me to the original issue of trying to use `convert_blip_2_original_to_pytorch.py`, perhaps you can help me figure out how the BLIP2 models were converted? (I understand, this is irrelevant to most users but only a few brave souls who are finetuning BLIP2 via LAVIS but want to then load it in HF.) I've tried both `pip install git+https://github.com/nielsrogge/LAVIS.git@fix_lavis` (mentioned in the script) and `lavis` from HEAD, but I am getting this trace ``` $ python ./convert_blip_2_original_to_pytorch.py Loading original model... Position interpolate from 16x16 to 26x26 tokenizer facebook/opt-6.7b Loading checkpoint shards: Done! 
Traceback (most recent call last): File "./convert_blip_2_original_to_pytorch.py", line 304, in <module> convert_blip2_checkpoint(args.model_name, args.pytorch_dump_folder_path, args.push_to_hub) File "/.../envs/lavis/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "./convert_blip_2_original_to_pytorch.py", line 216, in convert_blip2_checkpoint original_logits = original_logits.logits AttributeError: 'dict' object has no attribute 'logits' // indeed, this is a dictionary containing only 'loss' ``` what combination of versions of `transformers` and `lavis` was used during conversion? <|||||>Hi, Thanks for converting BLIP2 to HF :) I actually forked the LAVIS repo and made some tweaks to facilitate conversion (I removed a bunch of unnecessary requirements etc). See [here](https://github.com/huggingface/transformers/blob/4446b6b094a7c036d09059885bec679279c9b488/src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py#L27). <|||||>Hi Niels, thank you for checking this. I did use your fork (or so I thought, sigh), but I redid everything from scratch while comparing traces with code and, well... turned out I moved my blip2 conversion script to LAVIS git root folder which kept including their model (as it's in the `lavis` folder) even with your fixed one being installed (so I do apologies). I can now confirm that with your fork I was able to convert my model with snapshot before https://github.com/huggingface/transformers/pull/21405 and load it it in 8 bits with latest `bitsandbytes` keeping VRAM usage at 11.1GB (vs around 18.5GB without). Do you have any guidance on matching outputs between lavis and hf models? I ran about 50 samples though lavis/hf16/hf8 and while hf16 and hf8 are mostly consistent (good), lavis output is better in all cases. (see anecdotal examples below) Here is roughly how I load and run all models (https://gist.github.com/AstraliteHeart/4d7ebf834021b8e1c9bc439c1633002c) I tried to make sure all settings and rnd seeds are matching, but perhaps I am missing something? 
https://derpicdn.net/img/view/2023/2/23/3051871.png ``` 'caption_lavis': ['scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are all smiling white background', 'scootaloo, applebloom, and applejack in a group hug scootaloo and applebloom are jumping applejack is smiling white background', 'scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are jumping and smiling white background'], 'caption_hf_16': ['a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, rarity, rarity, rarity, rarity, rarity, rarity', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, rarity, rarity, rarity, rarity'], 'caption_hf_8': ['a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, rarity, pink', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, twilight sparkle, twilight sparkle, twilight sparkle'] ``` https://derpicdn.net/img/2017/7/7/1480500/large.png ``` 'caption_lavis': ['alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she is', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she is also wearing a book on her head and a book on her chest', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she has'], 'caption_hf_16': ['posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books\n', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books'], 'caption_hf_8': ['twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she','twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top', 'twilight sparkle is lying on the floor surrounded by a pile of 
books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor her'] ```<|||||>Thanks for reporting, that should not be the case! I extensively tested the greedy/beam search outputs on original vs my implementation to make sure everything works as expected. But the generate method has had some updates now so there might be a small issue. However isn't it weird that the first token is already different? cc'ing @gante here<|||||>Also I'm not sure you can run both LAVIS and Transformers main branch in the same environment to compare, cause LAVIS relies on an older version of Transformers<|||||>Results on top are from `transformers` https://gist.github.com/AstraliteHeart/4d7ebf834021b8e1c9bc439c1633002c + your fork of `lavis`. Some more tests (tldr, latest transformers still do not produce the same output) Official `lavis` repo: ``` ['scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are all smiling white background', 'scootaloo, applebloom, and applejack in a group hug scootaloo and applebloom are jumping applejack is smiling white background', 'scootaloo, apple bloom, and applejack in a group hug scootaloo, apple bloom, and applejack are jumping and smiling white background'] ``` ``` ['alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she is', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she is also wearing a book on her head and a book on her chest', 'alicorn twilight sparkle is laying on her back with a book on her head and a book on her chest she is surrounded by books on the floor and on the walls she has a book on her head and a book on her chest she has'] ``` Latest transformers: ``` 'caption_hf_16': [ 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, rarity, rarity, rarity, rarity, rarity, rarity', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applejack, scootaloo, applebloom, rarity, pinkie pie, twilight sparkle, rarity, twilight sparkle, twilight sparkle, rarity, rarity, rarity, rarity' ], 'caption_hf_8': [ 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, rarity, pink', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, pinkie pie, twilight sparkle, twilight sparkle', 'a series of images of sweetie belle, applebloom, scootaloo, applejack, rarity, pinkie pie, twilight sparkle, fluttershy, rarity, twilight sparkle, twilight sparkle, twilight sparkle' ] ``` ``` caption_hf_16': [ 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, surrounded by', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by a pile of books, 
surrounded by a pile of books, surrounded by a pile of books\n', 'posterior view of twilight sparkle lying on the floor surrounded by a pile of books, surrounded by a pile of books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books, surrounded by books' ], 'caption_hf_8': [ 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she', 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top of her she is surrounded by a pile of books on top', 'twilight sparkle is lying on the floor surrounded by a pile of books she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor she is surrounded by a pile of books on the floor her' ] ```<|||||>Hey @AstraliteHeart 👋 Differences in generation can be explained by many parts of the stack, from ninja numerical bugs to intentional implementation quirks. Debugging the exact cause takes time, so I want to ask for your help :D 1. Can you confirm that both `lavis` and `transformers` are recent versions? (latest release or newer) 2. Comparing results with sampling is impossible, as minor changes like the order of operations will produce different results. Have you confirmed that the results are different without sampling? (you can ensure that it is not sampling if you are not setting seeds and you're still getting the same outputs) 3. (If the answers to the questions above are positive) Can you please share a gist like the one you shared above, except without reliance on local data? It would help me get started 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> u have any guidance on matching outputs between lav Can you please help how you managed to convert this? I am also stuck is there any specific transformers version?<|||||>I have a PR here which aims to further verify equivalence: https://github.com/huggingface/transformers/pull/24854. The conversion script can be found [here](https://github.com/NielsRogge/transformers/blob/improve_blip2/src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py) and can be run as follows: ``` pip install -U git+https://github.com/nielsrogge/LAVIS.git@blip2_float32 git clone -b improve_blip2 git+https://github.com/nielsrogge/transformers.git cd transformers python src/transformers/models/blip_2/convert_blip_2_original_to_pytorch.py --model_name "blip2-flan-t5-xl" ``` The reason I forked LAVIS is to make sure I can compare both implementations using float32.
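A quick way to rule out sampling (gante's second point) is to compare greedy/beam outputs only. Below is a minimal sketch of the Transformers side, assuming the `blip2-flan-t5-xl` checkpoint and the public test image used elsewhere in this thread; the LAVIS side would need the equivalent deterministic call.

```python
import requests
import torch
from PIL import Image
from transformers import Blip2ForConditionalGeneration, Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16
).to("cuda")

url = "https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

# Deterministic decoding: any remaining mismatch vs. LAVIS cannot be blamed on sampling.
out = model.generate(**inputs, do_sample=False, num_beams=5, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))
```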
transformers
21,712
closed
Transformers version 4.27.0.dev0
### System Info Hello, I have a question. I would like to upgrade Transformers to 4.27 because I get the following error when I run run_mlm.py. The latest is 4.26.1 in pip install. -------------------------------------------------------------------------- python run_mlm.py --model_type bert Traceback (most recent call last): File "C:\Users\d_test_user\Documents\test\transformers-main\examples\pytorch\language-modeling\run_mlm.py", line 56, in <module> check_min_version("4.27.0.dev0") File "C:\Users\d_test_user\Documents\transformer_example\.env\lib\site-packages\transformers\utils\__init__.py", line 208, in check_min_version raise ImportError( ImportError: This example requires a source install from HuggingFace Transformers (see `https://huggingface.co/transformers/installation.html#installing-from-source`), but the version found is 4.26.1. Check out https://huggingface.co/transformers/examples.html for the examples corresponding to other versions of HuggingFace Transformers. -------------------------------------------------------------------------- Thanks for your help! ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction python run_mlm.py Traceback (most recent call last): File "C:\Users\d_test_user\Documents\test\transformers-main\examples\pytorch\language-modeling\run_mlm.py", line 56, in <module> check_min_version("4.27.0.dev0") File "C:\Users\d_test_user\Documents\transformer_example\.env\lib\site-packages\transformers\utils\__init__.py", line 208, in check_min_version raise ImportError( ImportError: This example requires a source install from HuggingFace Transformers (see `https://huggingface.co/transformers/installation.html#installing-from-source`), but the version found is 4.26.1. Check out https://huggingface.co/transformers/examples.html for the examples corresponding to other versions of HuggingFace Transformers. ### Expected behavior I want to create a bert model using example.
02-21-2023 00:10:03
02-21-2023 00:10:03
This problem has been resolved.<|||||>Can you share how to solve this issue? Thank you in advance. <|||||>Hi @kksj216, Use the following steps git clone https://github.com/huggingface/transformers.git cd transformers pip install -e .<|||||>@tanmey007 Thanks for your help! But that steps did not work for me... Still had the same error. 🥲 <|||||>> @tanmey007 Thanks for your help! But that steps did not work for me... Still had the same error. 🥲 I am using jupyter notebook, and used the following steps, !git clone https://github.com/huggingface/transformers.git import os os.chdir('transformers') !pip install -e . <|||||>You can either install directly from the `main` branch: https://huggingface.co/docs/transformers/installation#install-from-source Or through an editable install: https://huggingface.co/docs/transformers/installation#editable-install<|||||>Now it works in jupyter notebook. I really appreciate all your help!! :) But it still doesn't work in the terminal, is there any reason? Or are there other things I need to set up? <|||||>@kksj216 If it is not working in the terminal, it's likely the environment has not been updated and the correct version of transformers is not being used. To check which version of transformers is being run in the environment, you can run in the terminal: `python -c "import transformers; print(transformers.__version__)"` Or to see more information about the library, where it's installed etc: `pip show transformers` If you wish to run from the development branch, then the instructions @sanchit-gandhi or @tanmey007 posted should be followed. If this doesn't work, then I suggest uninstalling `transformers` from your environment and then try installing again. <|||||>> @kksj216 If it is not working in the terminal, it's likely the environment has not been updated and the correct version of transformers is not being used. > > To check which version of transformers is being run in the environment, you can run in the terminal: `python -c "import transformers; print(transformers.__version__)"` > > Or to see more information about the library, where it's installed etc: `pip show transformers` > > If you wish to run from the development branch, then the instructions @sanchit-gandhi or @tanmey007 posted should be followed. If this doesn't work, then I suggest uninstalling `transformers` from your environment and then try installing again. Thank you for your kind help! I solved the issue now :)
transformers
21,711
closed
Using run_mlm.py to pretrain a roberta base model from scratch outputs do not include <bos> or <eos> tokens
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.27.0.dev0 - Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.31 - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: deepspeed ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction I am attempting to train a roberta-base model using the defaults on a custom corpus. deepspeed --num_gpus 8 run_mlm.py --model_type roberta --max_seq_length 128 --do_train --per_device_train_batch_size 512 --fp16 --save_total_limit 3 --num_train_epochs 30 --deepspeed ds_config.json --learning_rate 1e-4 --eval_steps 50 --max_eval_samples 4000 --evaluation_strategy steps --tokenizer "roberta-large" --warmup_steps 30000 --adam_beta1 0.9 --adam_beta2 0.98 --adam_epsilon 1e-6 --weight_decay 0.01 --lr_scheduler_type linear --preprocessing_num_workers 8 --train_file my_text.txt --line_by_line --output_dir my_roberta_base The training works and the loss goes down and the accuracy goes up. However, when I compare the outputs to the original roberta-base I see a behavior that appears to be a glitch or problem with the training. ### Expected behavior Expected behavior using roberta-base from huggingface hub shows the first and last token of the output being the `<bos>` and `<eos>` tokens, respectively, while my new trained roberta-base model is showing token #8 ( and). I think this was learned instead of being automatically set to <bos> and <eos> like the expected behavior should be for this script. ```python from transformers import AutoTokenizer, AutoModelForMaskedLM import torch tokenizer = AutoTokenizer.from_pretrained("roberta-base") model1 = AutoModelForMaskedLM.from_pretrained("roberta-base", torch_dtype=torch.float16).cuda(0) model2 = AutoModelForMaskedLM.from_pretrained("rob_wiki_base", torch_dtype=torch.float16).cuda(0) text="The main causes of death for <mask> are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious <mask> has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller <mask>. Natural causes of death include adverse temperatures, predation by <mask> on young, and disease." 
input = tokenizer(text, truncation=True, padding=True, return_tensors="pt") output1=model1(input["input_ids"].cuda(0)) output2 = model2(input["input_ids"].cuda(0)) predicted_token_id1 = output1[0][0].argmax(axis=-1) predicted_token_id2 = output2[0][0].argmax(axis=-1) print("Original roberta-base output:") print(predicted_token_id1) print(tokenizer.decode(predicted_token_id1)) print("-"*20) print("My new roberta-base output:") print(predicted_token_id2) print(tokenizer.decode(predicted_token_id2)) print("-"*20) ``` Original roberta-base output: tensor([ 0, 133, 1049, 4685, 9, 744, 13, 18018, 32, 1050, 12, 3368, 743, 6, 215, 25, 14294, 8181, 8, 1050, 8720, 4, 2667, 2635, 12, 19838, 6, 10691, 3650, 34, 669, 7, 4153, 25062, 19, 39238, 12853, 12, 9756, 8934, 8, 7446, 4, 993, 313, 877, 293, 33, 57, 303, 19, 81, 654, 26172, 15, 106, 31, 39238, 12853, 5315, 4, 7278, 4685, 9, 744, 680, 12661, 3971, 6, 12574, 1258, 30, 22139, 15, 664, 6, 8, 2199, 4, 2], device='cuda:0') <s>The main causes of death for whales are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious behavior has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller strikes. Natural causes of death include adverse temperatures, predation by predators on young, and disease.</s> My new roberta-base output: tensor([ 8, 133, 1049, 4685, 9, 744, 13, 5868, 32, 1050, 12, 3368, 743, 6, 215, 25, 14294, 8181, 8, 1050, 8720, 4, 2667, 2635, 12, 19838, 6, 10691, 2574, 34, 669, 7, 4153, 25062, 19, 39238, 12853, 12, 9756, 8934, 8, 7446, 4, 993, 313, 877, 293, 33, 57, 303, 19, 81, 654, 26172, 15, 106, 31, 39238, 12853, 5315, 4, 7278, 4685, 9, 744, 680, 12661, 3971, 6, 12574, 1258, 30, 5868, 15, 664, 6, 8, 2199, 4, 8], device='cuda:0') andThe main causes of death for humans are human-related issues, such as habitat destruction and human objects. Their slow-moving, curious nature has led to violent collisions with propeller-driven boats and ships. Some manatees have been found with over 50 scars on them from propeller strikes. Natural causes of death include adverse temperatures, predation by humans on young, and disease. and
02-20-2023 23:38:57
02-20-2023 23:38:57
The model config.json have a notable difference between the roberta-base and my new pretrained roberta model. max_position_embeddings in roberta-base is equal to 514, while in my new pretrained model it is set to 512. I also notice in the script there is a default setting to "mask special tokens" We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it receives the `special_tokens_mask`. return_special_tokens_mask=True, Is it possible that this is the source of the issue? Thank you for any help that can be offered on this problem.<|||||>cc @ArthurZucker and @younesbelkada <|||||>Any updates on this? Would appreciate any help to identify the source of this bug.<|||||>Hey, this should probably be aske on the [`forum`](https://discuss.huggingface.co/) as it is not a bug and there we can reproduce your issue (the model is private). 1. The training might have gone wrong. 2. The `generation_config` or `config` file might be wrong. Both your `bos_token` and `eos_token` are wrong 0, and 2 changed to 8. If you can check the `eos` and `pad` and `bos` token arguments and try to make sure that the inputs that you feed to the model are the same, would be great. Also be careful with the formating of your issue, it is very hard to read. If you want an answer fast, this plays a bit against you 😉 <|||||>Maybe there is some misunderstanding in what I posted. To the best of my knowledge I am using an unmodified, default training script from huggingface on a plain text file using the default configuration for roberta (a model that has been on HF for 2 years or more I think). I did a fresh install from source of transformers on a 8x A100 instance. see here: https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py I ran the script using only default configuration commands (they are posted above) on a text file using the default roberta configuration, but the outputs are never the correct 0 and 2. Any configuration I am using is automatically generated by the training script and then I am running the generation script exactly the same as I do with roberta-base, but substituting the model directory generated by the run_mlm.py script. If I am running the script with all default parameters, I think it qualifies as a bug? <|||||>Okay! Thanks for clarifying, I will have a look as soon as I can. It seems like a bug indeed<|||||>The troubleshooting I did myself on this makes me think it has something to do with the special tokens being attention masked in the training dataset preparation. Normally masking special tokens makes sense for some language models (like the `<pad>` token), but I think in this case for the BOS/EOS you don't want them masked. The reason token 8 is showing up in those positions is because the word "and" is extremely common and I think it minimizes overall loss by putting that token. It was never configured to use token 8 (early on in the training it would be a random token like period "." or "the" or "and". ). Overall the model is still training and working well, its just not ever generating the EOS/BOS token in the "unmasked" output.<|||||>Ok, that's fairly interesting. Normally when generating, the bos token should be `forced` via the logits processor. So if you generate using `model.generate` I am guessing that this won't happen even if you have the `masks`. 
It is okay if there tokens are attention masked, I think they should always be forced (during training for example, the decoder input ids should always start with the `bos` so that it is not predicted, and then the loss is not computed on it. Does that make sense? <|||||>The roberta-base and roberta-large models on huggingface when used with `model.generate` does properly create the BOS/EOS tokens. The output from my checkpoints inserts an extra first and last token, but the token is not BOS/EOS and appears to be learned. <|||||>Is there any update about this issue, I'm facing the same error? @ArthurZucker <|||||>> The troubleshooting I did myself on this makes me think it has something to do with the special tokens being attention masked in the training dataset preparation. Normally masking special tokens makes sense for some language models (like the <pad> token), but I think in this case for the BOS/EOS you don't want them masked. The reason token 8 is showing up in those positions is because the word "and" is extremely common and I think it minimizes overall loss by putting that token. It was never configured to use token 8 (early on in the training it would be a random token like period "." or "the" or "and". ). Overall the model is still training and working well, its just not ever generating the EOS/BOS token in the "unmasked" output. So regarding the lead posted here by @Rallio67, I think I agree with him: - The special tokens should not be masked when computing the loss : the reason behind this that if you want the model to learn that it has to predict the `eos` and `bos` token when computing the loss, you should not mask them. This is visible as the model ends up learning to predict the most common words at the beginning and end, instead of predicting the bos and eos. I suggest trying out without the special mask, and if it works for you I'll try to find a fix that does not remove backward compatibility! <|||||>![Screenshot 2023-04-04 at 5 13 48 PM](https://user-images.githubusercontent.com/17705073/229838857-88c6ff16-4c95-4178-ba2c-6b93d69c701f.png) Training without special tokens also doesn't work, not sure what is the reason then<|||||>Without special tokens or without special masks? <|||||>I trained it with return_special_tokens_mask=False, but only for 3 epochs (is it possible that when I train it fully it's able to learn) ?<|||||>Yep, if you can would be great to see after the same amount of training as the model that raised the issue.<|||||>I trained the model for 75 epochs, still <bos> and <eos> tokens are not appearing<|||||>Hey! I won't really have time to dive deep into this one, If you could share some example inputs that are fed to the model (forgot to ask for the context of `my_text.txt`, but if the tokenizer does not pass bos and eos (by that I mean does not add them) it might be either the default roberta tokenizer that can't be used out of the box for this or something else. <|||||>Okay, here is a very relevant comment : https://github.com/huggingface/transformers/issues/22794#issuecomment-1598977285, it is important to make sure that when the script calls `torch_mask_tokens`, the loss is only computed on the masked tokens (and since there is a call to `masked_fill_(special_tokens_mask, value=0.0)`, which creates the probability of masking special tokens, setting is to `0`. 
This means that the next call: ```python probability_matrix.masked_fill_(special_tokens_mask, value=0.0) masked_indices = torch.bernoulli(probability_matrix).bool() labels[~masked_indices] = -100 # We only compute loss on masked tokens ``` will set the labels for `eos` and `bos` to `-100` always ignoring them. If you remove the special tokens mask, it is automatically created using `get_special_tokens_mask` which is why the tokens are not learned either. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
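To make the effect described above concrete, here is a toy, self-contained version of that masking step (token ids, shapes, and the 0.15 probability are illustrative assumptions, not the script's exact values):

```python
import torch

labels = torch.tensor([[0, 133, 1049, 4685, 2]])                 # <s> ... </s>
special_tokens_mask = torch.tensor([[1, 0, 0, 0, 1]]).bool()

probability_matrix = torch.full(labels.shape, 0.15)
probability_matrix.masked_fill_(special_tokens_mask, value=0.0)  # special tokens are never selected for masking
masked_indices = torch.bernoulli(probability_matrix).bool()
labels[~masked_indices] = -100  # <s> and </s> always end up as -100, so no loss is ever computed on them
```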
transformers
21,710
closed
Fix TVLT (torch device issue)
# What does this PR do? Just a few fixes for TVLT (torch device issue).
02-20-2023 20:50:24
02-20-2023 20:50:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,709
closed
Fix `get_class_in_module`
# What does this PR do? The PR #21646 added a line `subprocess.run(["python", "-c", cmd])`. But in our daily CI (docker env.), the `python` binary doesn't exist, only the `python3` binary exists, and this causes `FileNotFoundError: [Errno 2] No such file or directory: 'python'`. This PR adds `try ... except ...` to avoid this failure, but it's really ugly. See my comment on this PR's changes.
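A less ugly alternative (not what this PR does, just a sketch) would be to resolve the running interpreter instead of hard-coding a binary name; `cmd` below is a placeholder for the generated snippet:

```python
import subprocess
import sys

cmd = "print('hello from a subprocess')"  # placeholder for the generated snippet

# sys.executable points at whichever interpreter is currently running transformers,
# so this works whether the binary is called `python`, `python3`, or lives inside a venv.
subprocess.run([sys.executable, "-c", cmd], check=True)
```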
02-20-2023 20:13:49
02-20-2023 20:13:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,708
closed
Auto api Value Error addition to Troubleshoot
Given the existence of "exotic" models that don't have a mapping to auto classes, this PR adds a small section at the end of the Troubleshooting guide about the error raised when trying to load a model with Auto API when there's no mapping.
02-20-2023 19:48:06
02-20-2023 19:48:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,707
closed
[`Blip2`] Fix Blip-2 multi gpu
# What does this PR do? This PR should fix all the issues related to BLIP-2 and multi-GPU. Before this PR, BLIP-2 had incorrect `set_input_embeddings` and `get_input_embeddings` functions, leading to unexpected behaviours when using it with `accelerate`, since `accelerate` gets confused when creating a device map with incorrect tied weights. Do not merge before I figure out why this does not fix the behaviour with `blip2-flan-t5`. EDIT: should work properly now; there are some corner cases that users can encounter, but if they strictly follow the guidelines presented in https://github.com/huggingface/blog/blob/main/accelerate-large-models.md, it should be fine. Fixes: https://github.com/TimDettmers/bitsandbytes/issues/153 & https://github.com/huggingface/transformers/pull/21441#issuecomment-1435370577 cc @sgugger
02-20-2023 17:46:19
02-20-2023 17:46:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the suggestions! Adapted the code accordingly However, there seems to be something off with the current `accelerate` integration w `blip2` - here since we are calling `generate` of a module of `Blip2ForConditionalGeneration`, I think that we are facing an edge case that I am not sure whether the fix should be here or on `accelerate` I think there are 2 challenging points 1- if a users uses a device_map such as `auto` or `balanced` sometimes the `language_model` attribute will not have any `_hf_hook` attribute (from what I have got only the modules that are on the device_map and the parent module will have an `_hf_hook`), leading to the output of `language_model` not being on the correct device (as the output device will be determined by `language_model.lm_head._hf_hook`). 2- If a well-educated user passes a custom device_map such as: ```python device_map = { "query_tokens": 0, "vision_model":0, "language_model": 1, "language_projection": 0, "qformer": 0, } ``` the `language_model` attribute will indeed have a `_hf_hook`, however the output of the module will not be set to the correct device as per my understanding, `_hf_hook.io_same_device` is set to `True` only on the parent class. I proposed a hacky solution at e0104f0 but I am not happy with it. So I think that maybe a fix should be upstreamed on `accelerate` to enable some child modules behave as the parent module (i.e. have the same `_hf_hook` behaviour), or maybe on `generate` to allow output on a different device that the input, meaning that we add multiple `.to` in `sample`, `greedy`, etc. I am not sure what is the best solution here and I am sure that there is something simpler we can try! Here is a script to reproduce the initial bug: ```python import torch from transformers import Blip2ForConditionalGeneration, Blip2Processor from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from PIL import Image import requests url = "https://huggingface.co/hf-internal-testing/blip-test-image/resolve/main/demo.jpg" image = Image.open(requests.get(url, stream=True).raw) model_t5 = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small", device_map="balanced", torch_dtype=torch.float16) tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") device_map = { "query_tokens": 0, "vision_model":0, "language_model": 1, "language_projection": 0, "qformer": 0, } # can also try with custom device_map model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", device_map="balanced", torch_dtype=torch.float16) inputs = processor(images=image, return_tensors="pt").to(0, torch.float16) print(model_t5._hf_hook.execution_device) print(model._hf_hook.execution_device) print(model.language_model._hf_hook.execution_device) # this will fail predictions = model.generate(**inputs, do_sample=True) generated_text = processor.decode(predictions[0], skip_special_tokens=True) print(generated_text) ```<|||||>I can confirm the current implementation works is a users passes a `device_map` that has `language_model` , let me know if you see anything else that needs to be addressed!<|||||>Thank you very much @akkikiki for the very useful feedback! 
So just to summarize, we need: 1- A more explicit warning pointing to the links you have shared so that users can understand how to use correct `device_map` 2- The fix you proposed in https://github.com/huggingface/transformers/pull/21707#discussion_r1119275324 to make it work for some edge cases where the masks are spread across different devices in the case when n_gpus > 2 (I can only test on a enviornment where I have 2 GPUs for now) Is that correct? <|||||>> Thank you very much @akkikiki for the very useful feedback! So just to summarize, we need: 1- A more explicit warning pointing to the links you have shared so that users can understand how to use correct `device_map` 2- The fix you proposed in [#21707 (comment)](https://github.com/huggingface/transformers/pull/21707#discussion_r1119275324) to make it work for some edge cases where the masks are spread across different devices in the case when n_gpus > 2 (I can only test on a enviornment where I have 2 GPUs for now) Is that correct? Exactly! But the first one is just a suggestion so feel free to discard it :)<|||||>Thanks a mile @akkikiki , may I ask you to run the latest changes that I made on your side to confirm everything works as expected? Then we can merge I think! <|||||>Re-installed your latest branch and works perfectly fine! Thanks a lot @younesbelkada!!<|||||>Hi, I'm still getting errors with this weirdly enough (am on 4.29.0 which I believe should have this fix included). I'm running the code from the model tutorial copy pasted: ```python import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto", ) img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" 
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` which gives ``` /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/torch/utils/_contextli │ │ b.py:115 in decorate_context │ │ │ │ 112 │ @functools.wraps(func) │ │ 113 │ def decorate_context(*args, **kwargs): │ │ 114 │ │ with ctx_factory(): │ │ ❱ 115 │ │ │ return func(*args, **kwargs) │ │ 116 │ │ │ 117 │ return decorate_context │ │ 118 │ │ │ │ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/models/bl │ │ ip_2/modeling_blip_2.py:1854 in generate │ │ │ │ 1851 │ │ inputs_embeds = self.get_input_embeddings()(input_ids) │ │ 1852 │ │ inputs_embeds = torch.cat([language_model_inputs, inputs_embeds.to(language_mode │ │ 1853 │ │ │ │ ❱ 1854 │ │ outputs = self.language_model.generate( │ │ 1855 │ │ │ inputs_embeds=inputs_embeds, │ │ 1856 │ │ │ attention_mask=attention_mask, │ │ 1857 │ │ │ **generate_kwargs, │ │ │ │ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/torch/utils/_contextli │ │ b.py:115 in decorate_context │ │ │ │ 112 │ @functools.wraps(func) │ │ 113 │ def decorate_context(*args, **kwargs): │ │ 114 │ │ with ctx_factory(): │ │ ❱ 115 │ │ │ return func(*args, **kwargs) │ │ 116 │ │ │ 117 │ return decorate_context │ │ 118 │ │ │ │ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/generatio │ │ n/utils.py:1515 in generate │ │ │ │ 1512 │ │ │ │ ) │ │ 1513 │ │ │ │ │ 1514 │ │ │ # 11. run greedy search │ │ ❱ 1515 │ │ │ return self.greedy_search( │ │ 1516 │ │ │ │ input_ids, │ │ 1517 │ │ │ │ logits_processor=logits_processor, │ │ 1518 │ │ │ │ stopping_criteria=stopping_criteria, │ │ │ │ /proj/vondrick4/sachit/miniconda3/envs/pyt13/lib/python3.10/site-packages/transformers/generatio │ │ n/utils.py:2372 in greedy_search │ │ │ │ 2369 │ │ │ if eos_token_id is not None: │ │ 2370 │ │ │ │ if pad_token_id is None: │ │ 2371 │ │ │ │ │ raise ValueError("If `eos_token_id` is defined, make sure that `pad_ │ │ ❱ 2372 │ │ │ │ next_tokens = next_tokens * unfinished_sequences + pad_token_id * (1 - u │ │ 2373 │ │ │ │ │ 2374 │ │ │ # update generated ids, model inputs, and length for next step │ │ 2375 │ │ │ input_ids = torch.cat([input_ids, next_tokens[:, None]], dim=-1) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:0! 
``` Any ideas?<|||||>@sachit-menon The issue is that `device_map=auto` (at least for the recent HF versions that I touched) places the language model head at the later in GPUs (in this case `cuda:7`), but the original inputs are placed in the first GPU i.e., `cuda:0` Can you run ``` from accelerate import init_empty_weights, infer_auto_device_map from transformers import Blip2Processor, Blip2ForConditionalGeneration with init_empty_weights(): model = Blip2ForConditionalGeneration(config) device_map = infer_auto_device_map(model, no_split_module_classes=["T5Block"]) device_map['language_model.lm_head'] = device_map["language_model.decoder.embed_tokens"] # to make the genearted tokens and input_ids to be on the same device ``` and use the created `device_map` and set it as follows ``` model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map=device_map, ) ``` instead of `device_map="auto"` when initializing the model? <|||||>Thanks for the quick response. Now I get in the last line of the first block: `KeyError: 'language_model.decoder.embed_tokens'` <|||||>@sachit-menon Ops, sorry. Try out ``` max_memory={i: "10GiB" for i in range(8)} config = Blip2Config.from_pretrained(model_id) with init_empty_weights(): model = Blip2ForConditionalGeneration(config) device_map = infer_auto_device_map(model, no_split_module_classes=["T5Block"], dtype=torch.float16, max_memory=max_memory) device_map['language_model.lm_head'] = device_map["language_model.encoder.embed_tokens"] ``` or tweak `10GiB` to be adjusted to your GPU memory you have (the above worked with 16GB GPU).<|||||>is this supposed to work on t5-large? ``` config = AutoConfig.from_pretrained("t5-large") with init_empty_weights(): model = AutoModelForSeq2SeqLM.from_config(config) device_map = infer_auto_device_map(model, no_split_module_classes=["T5Block"], max_memory={i:1 for i in range(4)}) print(device_map) device_map['lm_head'] = device_map['encoder.embed_tokens'] self.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map='auto', cache_dir="model_cache2") ``` leads to this error in the training step: ``` File "/home/ec2-user/anaconda3/envs/JupyterSystemEnv/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 260, in forward return self.weight * hidden_states RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! ```<|||||>@cassianlewis Yes, replace `self.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map='auto', cache_dir="model_cache2")` with `self.model = AutoModelForSeq2SeqLM.from_pretrained(self.hparams.model_name_or_path, load_in_8bit=True, device_map=device_map cache_dir="model_cache2")`<|||||>Hey @akkikiki Sorry, that was a typo in my original comment - I already tried using `device_map = device_map` I tried `device_map['lm_head'] = device_map['encoder.embed_tokens']` and separately`device_map['lm_head'] = device_map['decoder.embed_tokens']` as suggested in https://github.com/akkikiki/huggingface_examples/blob/main/examples/load_flan_ul2.py None of this worked...
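For reference, a minimal two-GPU sketch based on the explicit `device_map` that worked earlier in this thread; the device indices and checkpoint are assumptions about the local setup. Keeping the whole `language_model` on a single GPU sidesteps the cross-device `lm_head` problem entirely:

```python
import torch
from transformers import Blip2ForConditionalGeneration, Blip2Processor

# Keep the whole language model on one GPU so its inputs, lm_head and generated ids
# never get split across devices; the vision side fits on the other GPU.
device_map = {
    "query_tokens": 0,
    "vision_model": 0,
    "language_projection": 0,
    "qformer": 0,
    "language_model": 1,
}

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xl", device_map=device_map, torch_dtype=torch.float16
)
```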
transformers
21,706
closed
Bloom's hidden states are None
Hi everyone, I used BloomForCausalLM, which was pretrained and released by BigScience. I then forwarded input tensors to the model and tried to get the corresponding hidden states. However, an error was raised saying that the hidden states are None. Has anyone encountered the same problem? Thank you very much. https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/bloom/modeling_bloom.py#L935
02-20-2023 16:30:50
02-20-2023 16:30:50
I found I had not checked the code carefully. This problem can be solved by passing the argument `output_hidden_states=True` to `forward`.
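For anyone landing here later, a minimal sketch of that fix (the small `bigscience/bloom-560m` checkpoint is only used to keep the example light):

```python
from transformers import AutoTokenizer, BloomForCausalLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
outputs = model(**inputs, output_hidden_states=True)

# Without output_hidden_states=True, outputs.hidden_states is None.
print(len(outputs.hidden_states), outputs.hidden_states[-1].shape)
```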
transformers
21,705
closed
a bug in transformers.GenerationMixin
### System Info transformers 4.27.0. In the greedy_search method of the GenerationMixin class, the unfinished_sequences variable at line 2177 should be placed inside the body of the 'while' loop so that it gets longer and longer with the decoding. Otherwise it is always a single value rather than a sequence. @gante ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction while True: unfinished_sequences = input_ids.new(input_ids.shape[0]).fill_(1) if synced_gpus: pass ba la ba la ... ### Expected behavior Fix as soon as possible
02-20-2023 15:38:46
02-20-2023 15:38:46
Hey @lixinliu1995 👋 I'm not sure I follow: `unfinished_sequences` holds a boolean for each sentence and is updated each iteration on L2247 (if a sentence contains any of the eos tokens, it is set to false). How would you expect it to behave?<|||||>yeah, you are right. It is my mistake.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
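A toy illustration of the bookkeeping described above, with made-up values: the tensor keeps one flag per sequence in the batch and is multiplied down to 0 once that sequence emits an eos token; it never grows with the decoded length.

```python
import torch

eos_token_id = 2
unfinished_sequences = torch.ones(3, dtype=torch.long)  # one flag per sequence in the batch

# One decoding step: the middle sequence emits eos, so its flag flips to 0 and stays there.
next_tokens = torch.tensor([5, 2, 7])
unfinished_sequences = unfinished_sequences.mul((next_tokens != eos_token_id).long())
print(unfinished_sequences)  # tensor([1, 0, 1])
```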
transformers
21,704
closed
Adding task guides to resources
Previously, we have added links to compatible models to task guides. This PR enables navigation in the opposite direction and adds links to relevant task guides (based on model mapping) to the list of resources in model docs. Those who land on the model docs should now be able to find relevant task guides quicker. The links are added to the list of resources (along with previously listed notebooks, blog posts, scripts, etc.)
02-20-2023 15:13:56
02-20-2023 15:13:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,703
closed
Fix typo in `PROCESSOR_MAPPING_NAMES` and add tests
# What does this PR do? Fix typo in `PROCESSOR_MAPPING_NAMES` and add a new repo check: we check all names in auto (name) mapping are defined in the library. The effect of this PR: (if `GITProcessor` is not fixed) ```bash Checking all names in auto name mappings are defined. Traceback (most recent call last): File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 922, in <module> check_repo_quality() File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 918, in check_repo_quality check_all_auto_object_names_being_defined() File "C:\Users\33611\Desktop\Project\transformers-hf-gcp\utils\check_repo.py", line 640, in check_all_auto_object_names_being_defined raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures)) Exception: There were 1 failures: `GITProcessor` appears in the mapping `PROCESSOR_MAPPING_NAMES` but it is not defined in the library. ```
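Roughly, the new check boils down to something like the sketch below (simplified: the real helper in `utils/check_repo.py` iterates over all of the auto name mappings, not just the processor one):

```python
import transformers
from transformers.models.auto.processing_auto import PROCESSOR_MAPPING_NAMES

failures = []
for model_type, class_name in PROCESSOR_MAPPING_NAMES.items():
    if not hasattr(transformers, class_name):
        failures.append(
            f"`{class_name}` appears in the mapping `PROCESSOR_MAPPING_NAMES` "
            "but it is not defined in the library."
        )
if failures:
    raise Exception(f"There were {len(failures)} failures:\n" + "\n".join(failures))
```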
02-20-2023 14:10:40
02-20-2023 14:10:40
_The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is irrelevant
transformers
21,702
closed
[SpeechT5HifiGan] Handle batched inputs
# What does this PR do? Modifies the SpeechT5 HiFiGAN model to accept batched inputs. This PR is **not** a breaking change: * If the spectrogram inputs are un-batched, the waveform outputs are un-batched (as before) * If the spectrogram inputs are batched, the waveform outputs are batched (new) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
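A quick sketch of what this means in practice, showing shapes only; the checkpoint name is the public SpeechT5 HiFi-GAN vocoder and the random spectrograms are placeholders:

```python
import torch
from transformers import SpeechT5HifiGan

vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

unbatched = torch.randn(100, vocoder.config.model_in_dim)   # (sequence_length, num_mel_bins)
batched = torch.randn(2, 100, vocoder.config.model_in_dim)  # (batch_size, sequence_length, num_mel_bins)

print(vocoder(unbatched).shape)  # 1-D waveform, as before
print(vocoder(batched).shape)    # (batch_size, num_samples), new with this PR
```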
02-20-2023 13:58:06
02-20-2023 13:58:06
_The documentation is not available anymore as the PR was closed or merged._<|||||>(failing test is unrelated)
transformers
21,701
closed
remove position ids and token type ids from forward args in docstring
# What does this PR do? Fixes #21567, which indicated that the docstring for the GPTNeoX model does not match the forward method's arguments. Removed `position_ids` and `token_type_ids`.
02-20-2023 13:15:52
02-20-2023 13:15:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>Applies to all the heads, that's why I removed it from the `GPT_NEOX_INPUTS_DOCSTRING` ! Or do you mean other models?
transformers
21,700
closed
Respect documentation on passive log level
# What does this PR do? The documentation states that setting a `log_level` to `"passive"` in the training arguments won't touch the log level, but this is not the case. Currently, setting `log_level` to `"passive"` is the same as setting it to `"info"`. Likewise, setting `log_level_replica` to `"passive"` is the same as setting it to `"warning"`. This PR fixes this and changes the default of `log_level_replica` to `"warning"` to have the same default for it. The question is whether we should change the default of `log_level` to `"info"` to have the same behavior as before, or leave it as is which would set it to warning unless the user has set their own Transformers verbosity to info like in the examples. Related to #20154
02-20-2023 09:37:43
02-20-2023 09:37:43
_The documentation is not available anymore as the PR was closed or merged._<|||||>while at it could you please remind me what does `passive` actually imply? > a 'passive' level which doesn't set anything and lets the application set the level how does the application set the level? which application? perhaps we need a small example?<|||||>Hmm, this broke many deepspeed tests that were relying on info log level in deepspeed. now deepspeed is no longer logging info `"DeepSpeed info: version={}, git-hash={}, git-branch={}"` and thus the tests fail as it is looking for this string to tell DS is running. Now, why would this change impact an underlying component I wonder.<|||||>OK, I now have to explicitly pass ` log_level="info" to trainer args to have the previous functionality. This looks like a BC breakage, no? I adapted `get_regression_trainer` to have the original behavior here: https://github.com/huggingface/transformers/pull/21769 so it's all back to working.<|||||>Yes, I did mention it changed the behavior in the description of the PR and asked for how to proceed. You and Lysandre both agreed the break was worth it in this case.<|||||>Totally, Sylvain. I guess I struggle to understand when a BC breakage is ok and when it's not.
transformers
21,699
closed
Graphormer fix
Removes failing call to `requires_backend` in Graphormer model since the model uses `is_cython_available` instead. @ydshieh
02-20-2023 09:20:37
02-20-2023 09:20:37
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,698
closed
Pass along revision in dynamic code fetch
# What does this PR do? The `revision` argument wasn't passed along to `cached_file` when fetching a dynamic config/modeling file; this PR fixes that. Partially fixes #21662
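Concretely, the path exercised here looks something like the sketch below (the repo name and revision are placeholders): before the fix, the config lookup respected `revision` while the dynamically fetched modeling code did not.

```python
from transformers import AutoConfig, AutoModel

# Both the config and the remote modeling code should now be fetched from the same revision.
config = AutoConfig.from_pretrained("user/custom-model", trust_remote_code=True, revision="v1.0")
model = AutoModel.from_pretrained("user/custom-model", trust_remote_code=True, revision="v1.0")
```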
02-20-2023 09:18:56
02-20-2023 09:18:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,697
closed
Fix-rag-finetune-project-requirement
# What does this PR do? Should fix #21692: the requirements for pytorch-lightning need to be pinned to `<=1.6.0`.
02-20-2023 09:18:44
02-20-2023 09:18:44
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,696
closed
added file
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-20-2023 00:11:56
02-20-2023 00:11:56
commit pr<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
21,695
closed
fix LayoutLMv3TokenizerFast subword label after 'Ġ' token
LayoutLMv3TokenizerFast produces an empty 'Ġ' token with `offset_mapping = (0, 0)`. The next token is then wrongly assumed to also be the beginning of a word and isn't correctly assigned `pad_token_label`. This may lead to misalignment of words and token representations. Other BPE tokenizers might be affected. This PR adds a check for whether the previous token had an empty `offset_mapping` (not including special tokens), removes the copy check from LayoutLMv2TokenizerFast for `_batch_encode_plus` because it is not affected (it uses WordPiece instead of BPE), and modifies a test with text that produces a 'Ġ' token. Fixes issue: #19978 @NielsRogge @ArthurZucker
02-19-2023 20:03:13
02-19-2023 20:03:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21695). All of your documentation changes will be reflected on that endpoint.<|||||>Also cc @amyeroberts <|||||>Hi @ArthurZucker, thanks for your investigations. This PR fixes the problem for LayoutLMv3 but I expect the problem to exist on other models using Fast BPE tokenization, I will take a look when I can to list all impacted models that need a fix.<|||||>Thanks a lot for this fix, would you be able to take into account my comment such that we can merge it? 🙏 Thanks! Btw the same fix could then be applied to LayoutLMv2 and LayoutXLM<|||||>LayoutLMv2 uses WordPiece and not BPE. From what I saw its vocabulary does not contain empty token and thus cannot produce (0, 0) offset_mapping when encoding.
transformers
21,694
closed
Apply ruff flake8-comprehensions
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21693 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Pinging @sgugger since this just enables additional checks in ruff and improves code quality. All the flake8-comprehensions checks are only included in the plugin if they demonstrably increase readability and perf. Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-19-2023 19:50:41
02-19-2023 19:50:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Seems like the two failing tests are flaky? One of the failing tests seems unable to import torch for some reason.<|||||>Yes, the two test failures are irrelevant (this is a test we normally launch on its own in SageMaker, not with the test suites). Thanks a lot for applying this, the result is a lot nicer!
transformers
21,693
closed
Enable flake8-comprehension ruff checks
### Feature request * Ruff was recently added as a linter to huggingface transformers. It provides out-of-the-box support for flake8-comprehensions checks, which improve list/set/dict comprehensions in Python and make them more readable and faster. Additionally, ruff has autofixes for all these rules, so applying them automatically to the codebase is straightforward and should improve the style and readability of the code. ### Motivation Better Faster / Improved Code ### Your contribution Applying the flake8-comprehensions linter to the library and enabling it.
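A few representative rewrites of the kind these rules apply automatically (a sketch; the rule codes in the comments follow the flake8-comprehensions documentation):

```python
items = [1, 2, 2, 3]
pairs = [("a", 1), ("b", 2)]

squares = list([x * x for x in items])     # C411: unnecessary list() around a list comprehension
squares = [x * x for x in items]

unique = set([x for x in items])           # C403: rewrite as a set comprehension
unique = {x for x in items}

lookup = dict([(k, v) for k, v in pairs])  # C404: rewrite as a dict comprehension
lookup = {k: v for k, v in pairs}
```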
02-19-2023 19:47:37
02-19-2023 19:47:37
transformers
21,692
closed
RAG: Which version of pytorch-lightning to use for finetune-rag.sh?
### System Info - `transformers` version: 4.24.0 - Platform: Linux-3.10.0-1160.83.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.11.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [x] My own task or dataset (give details below) ### Reproduction The latest version of pytorch-forecasting is not comptabible with the RAG module of transformers. I was able to successfuly create my own knowledge dataset but the finetune-rag.py step fails. I also looked at the version history and experimented with 1.6.4 throws an error I was wondering if there's a known version of pytorch_lightning that works with RAG. Steps to Reproduce: 1. From transformers root, run a shell script with `python examples/research_projects/rag/finetune_rag.py --data_dir /home/rparik/linkedInEmpForKnowledgeClean/data.csv \ --output_dir ./finetune_rag_output/ \ --model_name_or_path facebook/rag-token-nq \ --model_type rag_sequence \ --fp16 \ --gpus 1 \ --index_name custom \ --passages_path /home/rparik/projects/rag/transformers/linkKnowledgeBase/my_knowledge_dataset \ --index_path /home/rparik/projects/rag/transformers/linkKnowledgeBase/my_knowledge_dataset_hnsw_index.faiss \ ` Error: ``` File "examples/research_projects/rag/finetune_rag.py", line 17, in <module> from pytorch_lightning.plugins.training_type import DDPStrategy ModuleNotFoundError: No module named 'pytorch_lightning.plugins.training_type' ``` ### Expected behavior Script runs without errors
02-19-2023 19:26:15
02-19-2023 19:26:15
Looks like 1.6.0 worked<|||||>The requirement file states `pytorch-lightning >= 1.5.10`. Were you using an older version or a newer version? (should I pin version 1.6.0 as the max version?)<|||||>I can confirm that 1.9.1 didn't work, neither did 1.6.4. I haven't tried 1.6.1-1.6.3.
transformers
21,691
closed
Arijitx/wav2vec2 alignment
<img width="239" alt="result_igf" src="https://user-images.githubusercontent.com/88912522/219967634-5a2a0d23-c878-47e4-9278-058b1d6d44f1.png"> [WAUKEAFM2CA132599#](revert-1-Marco071086-patch-1) What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-19-2023 18:27:42
02-19-2023 18:27:42
cc @sanchit-gandhi <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,690
closed
save_vocabulary() got an unexpected keyword argument 'filename_prefix'
### System Info hi, Im trying to fine-tune T5ForConditionalGeneration model using trainer.train(). Im getting the following error: ``` Saving model checkpoint to models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500 Configuration saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/config.json Model weights saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/pytorch_model.bin tokenizer config file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/tokenizer_config.json Special tokens file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/special_tokens_map.json added tokens file saved in models/simple-finetuned-T5CHEM-to-SSRT/checkpoint-500/added_tokens.json --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [27], in <module> ----> 1 trainer.train() File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1521, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1516 self.model_wrapped = self.model 1518 inner_training_loop = find_executable_batch_size( 1519 self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size 1520 ) -> 1521 return inner_training_loop( 1522 args=args, 1523 resume_from_checkpoint=resume_from_checkpoint, 1524 trial=trial, 1525 ignore_keys_for_eval=ignore_keys_for_eval, 1526 ) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:1840, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1837 self.state.epoch = epoch + (step + 1) / steps_in_epoch 1838 self.control = self.callback_handler.on_step_end(args, self.state, self.control) -> 1840 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) 1841 else: 1842 self.control = self.callback_handler.on_substep_end(args, self.state, self.control) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2069, in Trainer._maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval) 2066 self._report_to_hp_search(trial, self.state.global_step, metrics) 2068 if self.control.should_save: -> 2069 self._save_checkpoint(model, trial, metrics=metrics) 2070 self.control = self.callback_handler.on_save(self.args, self.state, self.control) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2141, in Trainer._save_checkpoint(self, model, trial, metrics) 2138 self.store_flos() 2140 output_dir = os.path.join(run_dir, checkpoint_folder) -> 2141 self.save_model(output_dir, _internal_call=True) 2142 if self.deepspeed: 2143 # under zero3 model file itself doesn't get saved since it's bogus! Unless deepspeed 2144 # config `stage3_gather_16bit_weights_on_model_save` is True 2145 self.deepspeed.save_checkpoint(output_dir) File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2631, in Trainer.save_model(self, output_dir, _internal_call) 2628 self.deepspeed.save_checkpoint(output_dir) 2630 elif self.args.should_save: -> 2631 self._save(output_dir) 2633 # Push to the Hub when `save_model` is called by the user. 
2634 if self.args.push_to_hub and not _internal_call: File /opt/conda/lib/python3.8/site-packages/transformers/trainer.py:2685, in Trainer._save(self, output_dir, state_dict) 2683 self.model.save_pretrained(output_dir, state_dict=state_dict) 2684 if self.tokenizer is not None: -> 2685 self.tokenizer.save_pretrained(output_dir) 2687 # Good practice: save your training arguments together with the trained model 2688 torch.save(self.args, os.path.join(output_dir, TRAINING_ARGS_NAME)) File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:2132, in PreTrainedTokenizerBase.save_pretrained(self, save_directory, legacy_format, filename_prefix, push_to_hub, **kwargs) 2128 logger.info(f"Special tokens file saved in {special_tokens_map_file}") 2130 file_names = (tokenizer_config_file, special_tokens_map_file) -> 2132 save_files = self._save_pretrained( 2133 save_directory=save_directory, 2134 file_names=file_names, 2135 legacy_format=legacy_format, 2136 filename_prefix=filename_prefix, 2137 ) 2139 if push_to_hub: 2140 self._upload_modified_files( 2141 save_directory, repo_id, files_timestamps, commit_message=commit_message, token=token 2142 ) File /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py:2176, in PreTrainedTokenizerBase._save_pretrained(self, save_directory, file_names, legacy_format, filename_prefix) 2173 f.write(out_str) 2174 logger.info(f"added tokens file saved in {added_tokens_file}") -> 2176 vocab_files = self.save_vocabulary(save_directory, filename_prefix=filename_prefix) 2178 return file_names + vocab_files + (added_tokens_file,) TypeError: save_vocabulary() got an unexpected keyword argument 'filename_prefix' ``` So Im not sure what's happening exactly, especially that this argument is Optional. Im using tokenizers==0.12.1 & transformers==4.22.0 I would appreciate any help! ### Who can help? _No response_ ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) 
- [X] My own task or dataset (give details below) ### Reproduction Im using the model published here: https://github.com/HelloJocelynLu/t5chem ``` from transformers import AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer batch_size = 16 model_name = model_checkpoint.split("/")[-1] model_path = "/home/jovyan/workbench-shared-folder/retro-syn/models/pretrain/simple/" model = T5ForConditionalGeneration.from_pretrained(model_path) tokenizer = SimpleTokenizer(vocab_file=model_path + 'vocab.pt') args = Seq2SeqTrainingArguments( f"models/{model_name}-finetuned-T5CHEM-to-SSRT", evaluation_strategy = "epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, weight_decay=0.01, save_total_limit=3, num_train_epochs=1, predict_with_generate=True, fp16=True, push_to_hub=False, ) train_path_source = 'data/USPTO_50k/train.source' train_path_target = 'data/USPTO_50k/train.target' valid_path_source = 'data/USPTO_50k/val.source' valid_path_target = 'data/USPTO_50k/val.target' train_data = pd.read_csv(train_path_source, header = None) train_data = train_data.rename(columns = {0:'product'}) train_data_reactant = pd.read_csv(train_path_target, header = None) train_data_reactant = train_data_reactant.rename(columns = {0:'reactant'}) train_dataset = pd.concat([train_data, train_data_reactant], axis = 1) train_dataset.head(3) data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) max_input_length = 128 max_target_length = 128 task_type2 = "Reactants:" def preprocess_function(train_dataset): inputs = [task_type2 + ex for ex in train_dataset["product"]] targets = [ex for ex in train_dataset["reactant"]] model_inputs = tokenizer(inputs, max_length=max_input_length, truncation=True, padding=True, return_tensors='pt')#.squeeze(0) print(model_inputs) # Setup the tokenizer for targets with tokenizer.as_target_tokenizer(): labels = tokenizer(targets, max_length=max_target_length, truncation=True, padding=True, return_tensors='pt') #print(labels.shape) model_inputs["labels"] = labels["input_ids"] return model_inputs import datasets raw_datasets = datasets.DatasetDict({'train': datasets.Dataset.from_dict(train_dataset), 'val': datasets.Dataset.from_dict(valid_dataset)}) tokenized_datasets = raw_datasets.map(preprocess_function, batched=True) from transformers.trainer_utils import PredictionOutput from typing import Dict, List, NamedTuple def AccuracyMetrics(model_output: PredictionOutput) -> Dict[str, float]: label_ids: np.ndarray = model_output.label_ids # type: ignore predictions: np.ndarray = model_output.predictions.reshape(-1, label_ids.shape[1]) # type: ignore correct: int = np.all(predictions==label_ids, 1).sum() return {'accuracy': correct/len(predictions)} tokenized_datasets = tokenized_datasets.remove_columns(['product', 'reactant', 'token_type_ids']) trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['val'], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=AccuracyMetrics, ) trainer.train() ``` ### Expected behavior I would expect the model to be trained without errors.
02-19-2023 13:50:12
02-19-2023 13:50:12
The tokenizer you are passing to the `Trainer` does not look like it comes from the Transformers library and the reason you are getting the error is that its `save_pretrained` method doesn't look like it works. Just remove the line `tokenizer=tokenizer` in the creation of the `Seq2SeqTrainer` and you should be able to train.<|||||>Thanks so much for a prompt response, indeed that was the issue :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
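A minimal sketch of that suggested change — not part of the original thread — reusing the variables from the reproduction above, with the custom tokenizer simply left out of the `Seq2SeqTrainer` call:

```python
# Hypothetical fix: same setup as in the report, but the incompatible custom
# tokenizer is no longer passed to the Trainer, so Trainer.save_model() never
# calls its save_pretrained()/save_vocabulary().
trainer = Seq2SeqTrainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["val"],
    data_collator=data_collator,
    compute_metrics=AccuracyMetrics,
)
trainer.train()
```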
transformers
21,689
closed
Make schedulers picklable
### Feature request Change lambda functions passed to `LambdaLR` in `get_constant_schedule`, `get_constant_schedule_with_warmup`, `get_linear_schedule_with_warmup`, `get_cosine_schedule_with_warmup`, `get_cosine_with_hard_restarts_schedule_with_warmup` and `get_polynomial_decay_schedule_with_warmup` to callable objects. ### Motivation Python cannot serialize lambda and local functions. Torch created a workaround around this in their `state_dict` method of `LambdaLR` by not returning any non-picklable functions: ```python ... for idx, fn in enumerate(self.lr_lambdas): if not isinstance(fn, types.FunctionType): state_dict['lr_lambdas'][idx] = fn.__dict__.copy() return state_dict ``` While this approach is fine when LR schedule is constant and deterministic, it makes it impossible to change the schedule mid training dynamically using lambda functions since any changes will not be saved to checkpoints. In my particular case I wanted to implement a dynamic LR schedule based on evaluation metrics. I've implemented a wrapper around `LambdaLR` that applies transformation `fn: float -> float` to existing LR schedule: ```python class LambdaWrapper: def __init__(self, lr_lamda: Callable[[Union[float, int]], float], wrapper_function: Callable[[float], float]): self._wrapper_function = wrapper_function self._lr_lambda = lr_lamda def __call__(self, x: Union[float, int]): return self._wrapper_function(self._lr_lambda(x)) class DynamicScheduler: def __init__(self, lr_scheduler: LambdaLR): self._scheduler = lr_scheduler def __getattr__(self, item): # Calling the super class to avoid recursion return getattr(super(DynamicScheduler, self).__getattribute__('_scheduler'), item) def wrap_schedule(self, fn: Callable[[float], float]): """If you want this object to be picklable, pass only picklable callable objects as `fn`!""" wrappers_builder = partial(LambdaWrapper, wrapper_function=fn) # wrap in callable object to preserve picklability self._scheduler.lr_lambdas = list(map(wrappers_builder, self._scheduler.lr_lambdas)) ``` I've taken special care to preserve picklability, however, since `LambdaLR` instances created by `transformers` library hold lambda and local functions in them, pickling of `DynamicScheduler` (as well as it's state, which is the same as the wrapped `LambdaLR` state) fails. While reimplementing dynamic scheduling with lambda functions will allow the `torch` workaround that handles lambda functions in scheduler, the whole point of dynamic scheduling will be lost since the complex dynamically constructed lambdas: `f_n(f_n-1(...f_1(schedule(x))...))` will fall back to their default state: `schedule(x)`. Here is the callback I use to track evaluation metrics for anyone interested: ```python def get_warmup_steps(args: TrainingArguments, state: TrainerState) -> int: return ( args.warmup_steps if args.warmup_steps > 0 else math.ceil(state.max_steps * args.warmup_ratio) ) class DecreaseLRTransformer: def __init__(self, decrease_ratio: float): if decrease_ratio < 0.0 or decrease_ratio > 1.0: raise ValueError('Decrease ratio should be within [1.0, 0.0]') self._decrease_ratio = decrease_ratio def __call__(self, lr: float): return self._decrease_ratio * lr # Developer notice (may change in the future versions of transformers): # All kwargs have the following fields set: model, tokenizer, optimizer, lr_scheduler, train_dataloader, eval_dataloader class LRDecreaseCallback(TrainerCallback): """ A [`TrainerCallback`] that handles learning rate decrease based on evaluation metrics. 
""" def __init__(self, decrease_ratio: float, patience: int, *, decrease_on_warmup: bool = False, decrease_threshold: float = 0.0): self._transformer = DecreaseLRTransformer(decrease_ratio) self._patience = patience self._decrease_on_warmup = decrease_on_warmup self._decrease_threshold = decrease_threshold self._failed_checks = 0 def _metric_improved(self, new_metric: float, old_metric: float, *, greater_is_better: bool = True) -> bool: operator = np.greater if greater_is_better else np.less return operator(new_metric, old_metric) and abs(new_metric - old_metric) > self._decrease_threshold def check_metric_value(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, metric_value: float): # best_metric is set by code for load_best_model no_metric = (state.best_metric is None) warmup_steps = get_warmup_steps(args, state) skip_warmup = (self._decrease_on_warmup and warmup_steps >= state.global_step) if skip_warmup: return if no_metric or self._metric_improved(metric_value, state.best_metric, greater_is_better=args.greater_is_better): self._failed_checks = 0 control.should_save = True else: self._failed_checks += 1 def on_train_begin(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs): if args.metric_for_best_model is None: raise ValueError(f"{self.__class__.__name__} requires metric_for_best_model to be defined defined") if args.evaluation_strategy == IntervalStrategy.NO: raise ValueError(f"{self.__class__.__name__} requires IntervalStrategy of steps or epoch") def on_evaluate(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs): metrics: Dict[str, float] = kwargs['metrics'] lr_scheduler = kwargs['lr_scheduler'] if not isinstance(lr_scheduler, DynamicScheduler): logger.warning(f'{self.__class__.__name__} is not compatible with {lr_scheduler.__class__.__name__} scheduler! ' f'Wrap your scheduler with {DynamicScheduler.__class__.__name__} to change LR dynamically. ' f'{self.__class__.__name__} is disabled!') return metric_to_check = args.metric_for_best_model if not metric_to_check.startswith("eval_"): metric_to_check = f"eval_{metric_to_check}" metric_value = metrics.get(metric_to_check) if metric_value is None: logger.warning(f"{self.__class__.__name__} required metric_for_best_model, " f"but did not find {metric_to_check} in evaluation metrics. 
{self.__class__.__name__} is disabled!") return self.check_metric_value(args, state, control, metric_value) if self._failed_checks >= self._patience: lr_scheduler.wrap_schedule(self._transformer) self._failed_checks = 0 def on_log(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs): logs: Dict[str, float] = kwargs['logs'] logs['lr_decrease_patience'] = (self._patience - self._failed_checks) / self._patience ``` ### Your contribution The simplest and the cleanest workaround would be to make the local functions global: Intead of: ```python def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1): def lr_lambda(current_step: int): if current_step < num_warmup_steps: return float(current_step) / float(max(1, num_warmup_steps)) return max( 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)) ) return LambdaLR(optimizer, lr_lambda, last_epoch) ``` Do this: ```python def _linear_schedule_with_warmup_step(current_step: int, *, num_warmup_steps: int, num_training_steps: int) -> float: if current_step < num_warmup_steps: return float(current_step) / float(max(1, num_warmup_steps)) return max( 0.0, float(num_training_steps - current_step) / float(max(1, num_training_steps - num_warmup_steps)) ) def get_linear_schedule_with_warmup(optimizer, num_warmup_steps, num_training_steps, last_epoch=-1): schedule = partial(_linear_schedule_with_warmup_step, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) return LambdaLR(optimizer, schedule, last_epoch) ``` When created with global functions, partial function are picklable: ```python >>>from functools import partial >>>import pickle >>>def f(x): ... print(x) >>>with open('f.pkl', 'wb') as file: ... pickle.dump(partial(f, x='Dog'), file) >>>with open('f.pkl', 'rb') as file: ... unpickled_f = pickle.load(file) >>>unpickled_f() Dog ``` The fix is straightforward and I can create a PR. Nonetheless, it would be my first contribution so I might need some help along the way.
02-19-2023 11:03:47
02-19-2023 11:03:47
Thanks for explaining your issue in depth, and happy to review a PR!
transformers
21,688
closed
[`bnb`] fix `bnb` decoders bug
# What does this PR do? Currently on the `main` branch, there is a silent bug with `bnb` and encoder-decoder models leading to some modules not being converted in `int8` for these models and hurting users that are using the `main` branch - such as `peft`: https://github.com/huggingface/peft/issues/108 With https://github.com/huggingface/transformers/pull/21579 being introduced, the check to know if we should keep the module as `nn.Linear` has been slightly changed and [being more robust ](https://github.com/huggingface/transformers/blob/7f1cdf18958efef6339040ba91edb32ae7377720/src/transformers/utils/bitsandbytes.py#L126). Before #21579 the function `get_keys_to_not_convert` used to return `['decoder', 'lm_head', 'wo']` which is wrong, and leading to all decoder layers being not converted in int8 with the new check as mentioned above. This PR fixes the bug that was in this function, and adds a new test to make sure this will never happen Fixes: https://github.com/huggingface/peft/issues/108 cc @sgugger @amyeroberts
02-19-2023 09:26:29
02-19-2023 09:26:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,687
closed
Initialize OPT 175B model for a long time
### System Info Recently I have been trying to train OPT 175B (facebook/opt-175b) and found that my code needs almost 10 hours to initialize the model weights before starting training. Actually my code is pretty simple ``` ... config = PretrainedConfig.from_json_file('175b.json') model = OPTForCausalLM(config) ... ``` Is there any issue with my code, or do I have to configure something to accelerate the initialization? @ArthurZucker , @younesbelkada ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I already posted the part of the code that causes the problem in the description above. ### Expected behavior I think the initialization should not take so long.
02-19-2023 06:24:42
02-19-2023 06:24:42
The line takes a very long time because there are 176 billion parameters to initialize. The initialization is also performed several times, which is a bug we recently fixed. If you use the latest main, you might see it takes a bit less time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,686
closed
Loading T5ForConditionalGeneration model by TFT5ForConditionalGeneration using from_pt=True
### System Info Ran the codes in Colab [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing) - `transformers` version: 4.26.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> I have ran these code in Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing) ### Who can help? @Rocketknight1 @ArthurZucker @gante ### Information - [ ] The official example scripts - [x] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction **Code:** ```python from transformers import T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) model = T5ForConditionalGeneration(config=distill_config) model.save_pretrained("T5-pt") distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) model = TFT5ForConditionalGeneration(config=distill_config) model.from_pretrained("T5-pt", from_pt=True) ``` **Output:** _The following warnings were observed for` t5-small `model too_ ``` Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFT5ForConditionalGeneration: ['lm_head.weight', 'encoder.embed_tokens.weight', 'decoder.embed_tokens.weight'] - This IS expected if you are initializing TFT5ForConditionalGeneration from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFT5ForConditionalGeneration from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). All the weights of TFT5ForConditionalGeneration were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training. 
<transformers.models.t5.modeling_tf_t5.TFT5ForConditionalGeneration at 0x7f96b8372dc0> ``` _If your task is similar to the task the model of the checkpoint was trained on, you can already use TFT5ForConditionalGeneration for predictions without further training._ To check in case if the outputs were similar I used the following code to check a sample output: **Code:** Pytorch: ```python tokenizer = AutoTokenizer.from_pretrained('t5-small') def test(text, model, tokenizer): tokenized = tokenizer(text, return_tensors='pt', padding='max_length', truncation=True) print(tokenizer.batch_decode(model.generate(tokenized["input_ids"]).tolist(), skip_special_tokens=True)) test("summarize: i got permission to begin a start up company by my own..</s>", model, tokenizer) ``` TF2.0: ```python from tensorflow.python.ops.numpy_ops import np_config np_config.enable_numpy_behavior() def test(text, model, tokenizer): tokenized = tokenizer(text, return_tensors='tf', padding='max_length', truncation=True) print(tokenizer.batch_decode(model.generate(tokenized["input_ids"]).tolist(), skip_special_tokens=True)) test("summarize: i got permission to begin a start up company by my own..</s>", model, tokenizer) ``` Output: Pytorch: ``` ['cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod cod']``` Tensorflow: ``` ['allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance allowance'] ``` I have ran these above code in Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing) ### Expected behavior I have trained a T5ForConditionalGeneration model with a custom config using Pytorch and now I am trying to load it into Tensorflow but some weights of the PyTorch model were not used when initializing the TF 2.0 model T5ForConditionalGeneration. I got the same warning when trying with `t5-small` I have ran these following code in Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing) I would like to know how to "properly" import a `T5ForConditionalGeneration` model that was trained in Pytorch to `TFT5ForConditionalGeneration`? I have ran these above code in Colab notebook [here](https://colab.research.google.com/drive/1CJlQ5rVf5b3BysAnP6Y43I2LWl5ExcT2?usp=sharing)
02-19-2023 03:46:44
02-19-2023 03:46:44
Hey! Thanks for submitting this issue. In T5, the encoder and decoder's embedding tokens are shared, and tied. As you can see here in the pytorch modeling code : ```python _keys_to_ignore_on_load_missing = [ r"encoder.embed_tokens.weight", r"decoder.embed_tokens.weight", r"lm_head.weight", ] ``` This is because the values stored in `shared.weight` are used for these 3 layers. The issue is that these layers get filled after initialisation and are then saved. The warning can be safely ignored as the `shared` layer was properly initialized. <|||||>The outputs of the model should be the same however. I can't reproduce your output, and when I run an inference on my side, the generated tokens are the same. Here is a snippet: ```python >>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed >>> set_seed(0) #set seed for reproducibility >>> model = T5ForConditionalGeneration.from_pretrained("t5-small") >>> model.save_pretrained("Arthur/T5-pt") >>> tf_model = TFT5ForConditionalGeneration.from_pretrained("Arthur/T5-pt", from_pt=True) >>> tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) >>> inputs = tokenizer("this is a random input", return_tensors="pt") >>> model.generate(**inputs) # tensor([[0, 3, 5, 1]]) ``` ```python >>> inputs = tokenizer("this is a random input", return_tensors="tf") >>> tf_model.generate(**inputs) # <tf.Tensor: shape=(1, 4), dtype=int32, numpy=array([[0, 3, 5, 1]], dtype=int32)> ``` However it seems that if you use the a default config like the following: ```python >>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed >>> set_seed(0) #set seed for reproducibility >>> distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) >>> model = T5ForConditionalGeneration(config=distill_config) >>> model.save_pretrained("Arthur/T5-pt") >>> tf_model = TFT5ForConditionalGeneration.from_pretrained("Arthur/T5-pt", from_pt=True) >>> tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) >>> inputs = tokenizer("this is a random input", return_tensors="pt") >>> model.generate(**inputs) ``` the outputs do not match. This should not be expected<|||||>@ArthurZucker Is there any temporary quick fix to this problem? <|||||>The quickest fix is the following (tested locally) : `transformers-cli pt-to-tf --model-name "ArthurZ/T5-pt"`. This will make sure the conversion and the hidden states match. Will help you debug if there are any issues. In my case, conversion went well and logits match. Use `transformers-cli pt-to-tf --model-name <path_to_checkpoint_on_hub>`. Your model ( and the tokenizer) need to be updated and you need to be logged in using `huggingface-cli login `. See [here](https://huggingface.co/ArthurZ/T5-pt/discussions/1) for an example of the PR that will be automatically created to your repo<|||||>OKay, the issue stems from the fact that the model is `training`. If you make sure that `model.eval()` is done for both, everything is fixed! The following works. 
```python >>> from transformers import AutoTokenizer, T5Config, T5ForConditionalGeneration, TFT5ForConditionalGeneration, set_seed >>> set_seed(0) #set seed for reproducibility >>> distill_config = T5Config(d_model=256, d_kv = 32, d_ff=512, num_heads=4, decoder_start_token_id=0) >>> model = T5ForConditionalGeneration(config=distill_config).eval() >>> model.save_pretrained("Arthur/T5-pt") >>> tf_model = TFT5ForConditionalGeneration.from_pretrained("Arthur/T5-pt", from_pt=True) >>> tokenizer = AutoTokenizer.from_pretrained("t5-small", padding='max_length', truncation=True) >>> inputs = tokenizer("this is a random input", return_tensors="pt") >>> model.generate(**inputs) ``` This is an expected behaviour, I think I will just update the documentation to make sure it is clearly stated that this can be a discrepancy. Closing this as fixed, unless you still have problems! 😉 <|||||>Thank you for the solution @ArthurZucker :)
transformers
21,685
closed
`modeling_opt.py` if `previous_key_values` given and `attention_mask==None` the model throws an error.
### System Info - `transformers` version: 4.26.1 - Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ## Code 1. Load opt/tokenizer ```py from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "facebook/opt-125m" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` 2. Precompute `past_key_values` ```py text1 = "let's find a" tokenized1 = tokenizer(text1, return_tensors='pt') past_key_values = model(**tokenized1, use_cache=True)["past_key_values"] ``` 4. Compute another set of values without `attention_mask` ```py text2 = "bug" tokenized2 = tokenizer(text2, return_tensors='pt') model(input_ids=tokenized2["input_ids"], past_key_values=past_key_values) # error! The mistakenly created an attention_mask that is too small. ``` (try `distilgpt2` and it will work) ## stack trace ``` Traceback (most recent call last): File "/home/gkressi1/opt/ldet/rate_in-context.py", line 334, in <module> main() File "/home/gkressi1/opt/ldet/rate_in-context.py", line 325, in main output_config = compute_surprisals(config=config, model_object=model_object) File "/home/gkressi1/opt/ldet/rate_in-context.py", line 219, in compute_surprisals output_rating = model_object.incontext(config, prompt_list) File "/home/gkressi1/opt/ldet/src/model_objects/model_hf_causal_lm_big.py", line 85, in incontext output = self.get_model_output(rest_prompt, use_cache=True) File "/home/gkressi1/opt/ldet/src/model_objects/model_hf_causal_lm_big.py", line 63, in get_model_output output = self.model( File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/accelerate/hooks.py", line 158, in new_forward output = old_forward(*args, **kwargs) File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 932, in forward outputs = self.model.decoder( File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl return forward_call(*input, **kwargs) File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 639, in forward attention_mask = self._prepare_decoder_attention_mask( File "/home/gkressi1/.conda/envs/llm/lib/python3.9/site-packages/transformers/models/opt/modeling_opt.py", line 546, in _prepare_decoder_attention_mask expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask RuntimeError: The size of tensor a (93) must match the size of tensor b (1679) at non-singleton dimension 3 ``` ### Expected behavior The model should create the attention mask by itself and not throw an error. From the surface, this seems to be an easy fix: 1. 
Delete lines [635](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L635) and [636](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L635) 2. Move lines [639-642](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L639) to before what is currently line [637](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L637) 3. Check TF/Flax models (?). All the best!
02-19-2023 02:36:16
02-19-2023 02:36:16
Hey! Thanks for submitting this issue! Passing attention maks solves the problem, and usually we expect to pass attention masks when you are using the `past_key_values`(for example in generate). It is debatable whether the default behaviour should rely on the past_key_values. Do you have a specific usage in mind? The following works as expected: ```python attn = torch.cat((tokenized1["attention_mask"], tokenized2["attention_mask"]), -1) text2 = "bug" tokenized2 = tokenizer(text2, return_tensors='pt') model(input_ids=tokenized2["input_ids"], past_key_values=past_key_values,attention_mask =attn) ``` This way is the expected usage. When training or doing an inference, you should probably be in a for loop where the attention mask is defined based on the entire input. <|||||>I agree that manually adding the attention_mask is an easy fix. I am using a shared context as `past_key_values` and then computing different model outputs given the context. In that case I save the contexts `past_key_values` and use them later on. It is easy to recompute/save the contexts attention_mask and concat it for every output - but * OPT model behavior is inconsistent to other model's I have been using (gpt-neo, bloom) * it is [not documented](https://huggingface.co/docs/transformers/v4.26.1/en/model_doc/opt#transformers.OPTForCausalLM.forward.past_key_values) that the expected usage is passing the `attention_mask` when using `past_key_values` * the thrown error is not descriptive of the issue I do not understand what you mean with "default behaviour should rely on the past_key_values" - it seems to me that default behavior is not affected by changing this: line [636](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L636) seems to have exactly the same job that [639 - 642](https://github.com/huggingface/transformers/blob/ae54e3c3b18bac0832ad62ea9b896dfd52a09850/src/transformers/models/opt/modeling_opt.py#L639) has, just that it does not take into account `past_key_values` introducing the deviation of model behavior to other models. I can understand if you say that passing `attention_mask` is expected behavior for using `past_key_values`, but maybe that could be mentioned somewhere?<|||||>Totally agree with you, will open a PR to adress this. I think this was also blocking us from adding the ONNX config for this model! Thanks for this 😉
transformers
21,684
closed
Add loss for BridgeTowerForMaskedLM and BridgeTowerForImageAndTextRetrieval
# What does this PR do? This PR adds losses to BridgeTowerMaskedLM and BridgeTowerForImageAndTextRetrieval <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-19-2023 01:15:39
02-19-2023 01:15:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @amyeroberts and @younesbelkada <|||||>Thank you @amyeroberts, @younesbelkada, @regisss for your review and your suggestions. We have addressed your comments and have added a few tests for loss computation and for forward/backward, as suggested. Can you please help to merge this PR if possible? Thanks a lot<|||||>Thank you @amyeroberts for approving this PR. Thank you @younesbelkada for your suggestion. I have resolved the failed quality tests. Can you please approve and merge this PR? Thanks a lot<|||||>> Let's see what @ydshieh & @sgugger will say No problem for me, as this is used in > 100 places, and the tensor is changed to a python scalar before using it.
transformers
21,683
closed
Update summarization.mdx
Fix link in documentation Fixes # 21596 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-18-2023 23:31:47
02-18-2023 23:31:47
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21683). All of your documentation changes will be reflected on that endpoint.
transformers
21,682
open
kros_test
### Model description 1st test model trained by 100 pages ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation _No response_
02-18-2023 10:29:29
02-18-2023 10:29:29
transformers
21,681
closed
Default Datatype issue with model on OPT-13B
### System Info Any CPU machine with Transformers 4.26.0 ### Who can help? _No response_ ### Information - [x] The official example scripts - [ ] My own modified scripts ### Tasks - [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` from transformers import AutoModelForCausalLM, AutoTokenizer import torch model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b") print(model.dtype) ``` torch.float32 is printed out ### Expected behavior I expected the dtype to be float16. The model saved in the Hugging Face repo is in float16 format; converting it to float32 may mess up the behavior.
02-17-2023 21:47:20
02-17-2023 21:47:20
That is incorrect. The dtype of a model in PyTorch is always float32, regardless of the dtype of the checkpoint you saved. If you load a float16 checkpoint in a model you create (which is in float32 by default), the dtype that is kept at the end is the dtype of the model, not the dtype of the checkpoint. This is because a lot of hardware does not actually support dtypes other than float32 (for instance, you won't be able to generate on the CPU if your model is in float16). To load a model in float16, you have to ask explicitly with `torch_dtype=torch.float16` in your `from_pretrained` call. To load the model in the precision it was saved in, you have to use `torch_dtype="auto"`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
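A short illustration of the two loading options described in the answer above, using the same checkpoint as the report (assuming you have enough memory for a 13B model):

```python
import torch
from transformers import AutoModelForCausalLM

# Explicitly ask for half precision
model_fp16 = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16)
print(model_fp16.dtype)  # torch.float16

# Or load in whatever precision the checkpoint was saved in
model_auto = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype="auto")
print(model_auto.dtype)  # torch.float16 for this checkpoint
```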
transformers
21,680
closed
model fine-tuning error
### System Info Hi to all, I fine-tune a model for my dataset. But I need some help with the inference execution. I train the model with a tokenizer without truncation, but I receive the first error in inference. So I tried to retrain the model with truncation activated, as shown in the code. Still, I encountered a new error during the training that only appeared after adding truncation into the tokenizer. Now, if I try to train the network without truncation on the tokenizer, the training not working, and I need help understanding what happens. ### Who can help? @ArthurZucker @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Inference error: RuntimeError: The size of tensor a (726) must match the size of tensor b (512) at non-singleton dimension 1 The function for classification is: ``` def classification_infer(data, model_path): # device device = find_device() # data preprocessing data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting) data = data.loc[~data['Classe'].isin([0, 1, 2])] # model loading model = AutoModelForSequenceClassification.from_pretrained(model_path, num_labels=3) tokenizer = AutoTokenizer.from_pretrained(model_path) print("Model loaded") print("Classifying...") classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, device=device) tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512, 'return_tensors': 'pt'} classifier_output = classifier(data['usefull'].tolist()) print("Classification completed") data['Classe'] = [int(x['label']) for x in classifier_output] return data ``` Training function, with truncation active: ``` def classification_train(data): # metric metric = evaluate.load('accuracy') def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) # Load tokenizer tokenizer_kwargs = {'padding': True, 'truncation': True, 'max_length': 512, 'return_tensors': 'pt'} tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased", tokenizer_kwargs=tokenizer_kwargs) # preprocessing function def preprocess_function(data): return tokenizer(data['text']) # device device = find_device() # data preprocessing data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting) # dataset creation dataset = pd.DataFrame() dataset[['text', 'label']] = data.loc[data['Classe'].isin([0, 1, 2]), ['usefull', 'Classe']] dataset['label'] = dataset['label'].astype(int) # dataset equilibrium dataset = dataset.groupby('label').head(100) dataset['text'] = dataset['text'].map(lambda x: x.lower()) # dataset split train, test = train_test_split(dataset, test_size=0.2, random_state=42) # huggingface dataset train = Dataset.from_pandas(train) train = train.map(preprocess_function, batched=True) test = Dataset.from_pandas(test) test = test.map(preprocess_function, batched=True) # Load model model = AutoModelForSequenceClassification.from_pretrained("dbmdz/bert-base-italian-cased", num_labels=3) model.to(device) # data collator data_collator = DataCollatorWithPadding(tokenizer=tokenizer) # training arguments training_args = TrainingArguments( output_dir='./model_weight', # output directory num_train_epochs=2, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training 
per_device_eval_batch_size=64, # batch size for evaluation weight_decay=0.01, # strength of weight decay evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, metric_for_best_model='eval_accuracy', greater_is_better=True, report_to="wandb", # enable logging to W&B run_name="bert-base-italian-cased-fit-for-crm", # name of the W&B run (optional) label_names=['0', '1', '2'] ) # trainer trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train, # training dataset eval_dataset=test, # evaluation dataset data_collator=data_collator, # data collator tokenizer=tokenizer, # tokenizer compute_metrics=compute_metrics, # the callback that computes metrics of interest ) # train trainer.train() ``` Error during the evaluation step with the training function with truncation active: IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed training function withouth truncation: ``` def classification_train(data): # metric metric = evaluate.load('accuracy') def compute_metrics(eval_pred): logits, labels = eval_pred predictions = np.argmax(logits, axis=-1) return metric.compute(predictions=predictions, references=labels) # Load tokenizer tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-italian-cased") # preprocessing function def preprocess_function(data): return tokenizer(data['text'], truncation=False) # device device = find_device() # data preprocessing data[['notuseful', 'usefull']] = data['Descrizione'].apply(text_splitting) # dataset creation dataset = pd.DataFrame() dataset[['text', 'label']] = data.loc[data['Classe'].isin([0, 1, 2]), ['usefull', 'Classe']] dataset['label'] = dataset['label'].astype(int) # dataset equilibrium dataset = dataset.groupby('label').head(100) dataset['text'] = dataset['text'].map(lambda x: x.lower()) # dataset split train, test = train_test_split(dataset, test_size=0.2, random_state=42) # huggingface dataset train = Dataset.from_pandas(train) train = train.map(preprocess_function, batched=True) test = Dataset.from_pandas(test) test = test.map(preprocess_function, batched=True) # Load model model = AutoModelForSequenceClassification.from_pretrained("dbmdz/bert-base-italian-cased", num_labels=3) model.to(device) # data collator data_collator = DataCollatorWithPadding(tokenizer=tokenizer) # training arguments training_args = TrainingArguments( output_dir='./model_weight', # output directory num_train_epochs=2, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation weight_decay=0.01, # strength of weight decay evaluation_strategy="epoch", save_strategy="epoch", load_best_model_at_end=True, metric_for_best_model='eval_accuracy', greater_is_better=True, report_to="wandb", # enable logging to W&B run_name="bert-base-italian-cased-fit-for-crm", # name of the W&B run (optional) label_names=['0', '1', '2'] ) # trainer trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train, # training dataset eval_dataset=test, # evaluation dataset data_collator=data_collator, # data collator tokenizer=tokenizer, # tokenizer compute_metrics=compute_metrics, # the callback that computes metrics of interest ) # train trainer.train() ``` ### Expected behavior I expect that with the training function, I can train the network 
with the tokenizer arguments, and I expect the inference to work when I try to classify a text
02-17-2023 18:18:00
02-17-2023 18:18:00
I have found the solution. The problem is that the evaluation batch size is greater than the number of rows in the evaluation dataset. Now I have set it to 16 and it works. But now I have a problem with the inference, because I trained the model with the tokenizer max_length set to 1024 (and the model with the same setting), but when I use the model weights for inference I keep receiving the error about the size of the tensor: RuntimeError: The size of tensor a (726) must match the size of tensor b (512) at non-singleton dimension 1<|||||>Hey! Could you give more details on the exact trace that you are getting? I have no idea where it comes from so can't really help, it could be a problem with loading the checkpoints or anything. Also can you share a simple inference reproducing script? Thanks! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
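One hedged guess at the remaining inference error, not confirmed in the thread: the `tokenizer_kwargs` prepared in `classification_infer` are never passed to the pipeline call, so inputs longer than the 512 positions the base BERT model supports are not truncated. Forwarding them at call time would look roughly like this (names reuse the reporter's code):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer, device=device)
# Forward the truncation settings to the pipeline call itself
tokenizer_kwargs = {"padding": True, "truncation": True, "max_length": 512}
classifier_output = classifier(data["usefull"].tolist(), **tokenizer_kwargs)
```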
transformers
21,679
closed
Add ConvNeXT V2
# What does this PR do? Adds [ConvNeXT V2](https://arxiv.org/pdf/2301.00808.pdf) to transformers, including a backbone. ConvNeXT V2 features minimal changes to the `ConvNextLayer` and achieves an average 1% accuracy gain over ConvNext V1. Original repo is over [here](https://github.com/facebookresearch/ConvNeXt-V2). - [X ] Upload ImageNet 1K fine-tuned models - [x] Upload ImageNet 22K fine-tuned models - [x] Update model cards - [ ] Fix TF model bugs ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ X] Did you write any new necessary tests?
02-17-2023 15:33:51
02-17-2023 15:33:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,678
closed
cached_path disappeared from the API
### System Info - `transformers` version: 4.26.1 - Platform: Linux-5.15.0-60-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction A tool using an older version of transformers uses ``` from transformers.file_utils import cached_path ``` However, this api function disappeared sometime in 2022 but I could not find any information about what it should get replaced with or similar, change log or similar. However even in the current transformers repo, this method gets used in some example files, sometimes re-defining this method, sometimes using the same import which does not work any longer: ``` examples/research_projects/visual_bert/modeling_frcnn.py:from utils import WEIGHTS_NAME, Config, cached_path, hf_bucket_url, is_remote_url, load_checkpoint examples/research_projects/visual_bert/modeling_frcnn.py: resolved_archive_file = cached_path( examples/research_projects/visual_bert/utils.py: resolved_config_file = cached_path( examples/research_projects/visual_bert/utils.py:def cached_path( examples/research_projects/lxmert/modeling_frcnn.py:from utils import WEIGHTS_NAME, Config, cached_path, hf_bucket_url, is_remote_url, load_checkpoint examples/research_projects/lxmert/modeling_frcnn.py: resolved_archive_file = cached_path( examples/research_projects/lxmert/utils.py: resolved_config_file = cached_path( examples/research_projects/lxmert/utils.py:def cached_path( examples/research_projects/pplm/run_pplm.py:from transformers.file_utils import cached_path examples/research_projects/pplm/run_pplm.py: resolved_archive_file = cached_path(params["url"]) examples/research_projects/pplm/run_pplm.py: filepath = cached_path(BAG_OF_WORDS_ARCHIVE_MAP[id_or_path]) examples/research_projects/seq2seq-distillation/_test_bash_script.py:from transformers.file_utils import cached_path examples/research_projects/seq2seq-distillation/_test_bash_script.py: data_cached = cached_path( ``` ### Expected behavior The import should be possible for backwards compatibility or documentation explain what to replace it with. Version 4.21.0 seems the last version where that function could get imported
02-17-2023 14:25:21
02-17-2023 14:25:21
The `cached_path` API was a private util for our downloads (note that we consider anything not in the main init to be private). None of the research projects are actively maintained, so they only work with the version of Transformers that was current when they were created. You should now use the [`huggingface_hub` library](https://github.com/huggingface/huggingface_hub) to manage downloading and caching of files from the Hub. The closest thing we have to `cached_path` is `transformers.utils.hub.cached_file` in the current version.<|||||>Thanks, I will see if I can monkey-patch that tool/library accordingly.
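A minimal migration sketch for that kind of replacement, assuming all the legacy call needs is to resolve a Hub-hosted file to a local cached path (the repo id and filename below are only placeholders):

```python
from huggingface_hub import hf_hub_download
from transformers.utils.hub import cached_file

# Option 1: resolve (downloading if necessary) a single file from a Hub repo into the local cache.
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")

# Option 2: the closest in-library equivalent of the old helper mentioned above; it lives in a
# private util module, so its location/signature may change between releases.
config_path = cached_file("bert-base-uncased", "config.json")
```

Calls that passed a raw URL (as `run_pplm.py` above does with `cached_path(params["url"])`) would need a different shim, since both helpers here take a repo id plus filename rather than a URL.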
transformers
21,677
closed
Protobuf 4 support
### Feature request Currently transformers requires protobuf 3 or lower https://github.com/huggingface/transformers/blob/a8eb4f79f946c5785f0e91b356ce328248916a05/setup.py#L141 Support for version 4 should be added. ### Motivation Some Python packages only work with protobuf 4 so transformers is incompatible with them (for example [flytekit](https://github.com/flyteorg/flytekit) >= 1.3). ### Your contribution -
02-17-2023 12:47:20
02-17-2023 12:47:20
Last time we checked, `protobuf>=4` was blowing up sentencepiece entirely, which is a dependency we really need in Transformers. I don't know if that has been fixed since then, maybe @ydshieh could check when he has some time?<|||||>Running the T5 tokenization tests produces a lot of failures (the T5 tokenizer uses `sentencepiece`) if I use `protobuf==4.22.0`. Also, I see the following conflicts when I install the latest `protobuf`: ```bash tensorflow 2.11.0 requires protobuf<3.20,>=3.9.2, but you have protobuf 4.22.0 which is incompatible. tensorboardx 2.5.1 requires protobuf<=3.20.1,>=3.8.0, but you have protobuf 4.22.0 which is incompatible. tensorboard 2.11.1 requires protobuf<4,>=3.9.2, but you have protobuf 4.22.0 which is incompatible. ray 2.0.0 requires protobuf<4.0.0,>=3.15.3, but you have protobuf 4.22.0 which is incompatible. onnx 1.12.0 requires protobuf<=3.20.1,>=3.12.2, but you have protobuf 4.22.0 which is incompatible. ``` If `tensorflow` is installed in this case, even the PyTorch tests will fail, since there is ```bash File "/home/huggingface/transformers-hf-gcp/src/transformers/trainer_utils.py", line 47, in <module> import tensorflow as tf ``` <|||||>So it looks like lots of libraries in our soft dependencies do not support protobuf 4 yet. We won't be able to offer support either until they do :-)
transformers
21,676
closed
Generate: eta sampling numerical stability
# What does this PR do? Minor numerical stability patch: before this change, the exception below was popping up sometimes, especially at lower numerical resolutions. Compute entropy from logits instead -> no exception <details> ```py │ /home/joao/transformers/src/transformers/generation/logits_process.py:423 in __call__ │ │ │ │ 420 │ │ # Calculate the adaptive cutoff │ │ 421 │ │ probabilities = scores.softmax(dim=-1) │ │ 422 │ │ print("probs > 0:", (probabilities > 0).sum(dim=1).max()) │ │ ❱ 423 │ │ entropy = torch.distributions.Categorical(probs=probabilities).entropy() │ │ 424 │ │ eta = torch.min(self.epsilon, torch.sqrt(self.epsilon) * torch.exp(-entropy))[.. │ │ 425 │ │ indices_to_remove = probabilities < eta │ │ 426 │ │ │ │ /home/joao/hf/lib/python3.10/site-packages/torch/distributions/categorical.py:66 in __init__ │ │ │ │ 63 │ │ self._param = self.probs if probs is not None else self.logits │ │ 64 │ │ self._num_events = self._param.size()[-1] │ │ 65 │ │ batch_shape = self._param.size()[:-1] if self._param.ndimension() > 1 else torch │ │ ❱ 66 │ │ super(Categorical, self).__init__(batch_shape, validate_args=validate_args) │ │ 67 │ │ │ 68 │ def expand(self, batch_shape, _instance=None): │ │ 69 │ │ new = self._get_checked_instance(Categorical, _instance) │ │ │ │ /home/joao/hf/lib/python3.10/site-packages/torch/distributions/distribution.py:56 in __init__ │ │ │ │ 53 │ │ │ │ value = getattr(self, param) │ │ 54 │ │ │ │ valid = constraint.check(value) │ │ 55 │ │ │ │ if not valid.all(): │ │ ❱ 56 │ │ │ │ │ raise ValueError( │ │ 57 │ │ │ │ │ │ f"Expected parameter {param} " │ │ 58 │ │ │ │ │ │ f"({type(value).__name__} of shape {tuple(value.shape)}) " │ │ 59 │ │ │ │ │ │ f"of distribution {repr(self)} " │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ ValueError: Expected parameter probs (Tensor of shape (16, 32128)) of distribution Categorical(probs: torch.Size([16, 32128])) to satisfy the constraint Simplex(), but found invalid values: tensor([[0.0000e+00, 3.9062e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 3.4485e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 7.6953e-01, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], ..., [0.0000e+00, 6.3477e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 2.8381e-03, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00], [0.0000e+00, 6.3479e-06, 0.0000e+00, ..., 0.0000e+00, 0.0000e+00, 0.0000e+00]], device='cuda:0', dtype=torch.bfloat16) ``` </details>
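A minimal sketch of the idea (variable names mirror the traceback above, but this is not the verbatim diff):

```python
import torch


def eta_cutoff_mask(scores: torch.Tensor, epsilon: torch.Tensor) -> torch.Tensor:
    """Boolean mask of tokens to drop under eta sampling (sketch)."""
    probabilities = scores.softmax(dim=-1)
    # Building the Categorical from logits avoids the Simplex() validation that
    # low-precision probabilities (e.g. bfloat16 softmax outputs) can fail due to rounding.
    entropy = torch.distributions.Categorical(logits=scores).entropy()
    eta = torch.min(epsilon, torch.sqrt(epsilon) * torch.exp(-entropy))[..., None]
    return probabilities < eta
```

Only the entropy computation changes here; the rest mirrors the existing processor logic described above.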
02-17-2023 11:02:40
02-17-2023 11:02:40
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,675
closed
Fix multi-gpu training error for LayoutLMv2
# What does this PR do? Fixes #14110 ## Issue When training a LayoutLMv2 model with multiple GPUs using `torchrun --standalone --nnodes=1 --nproc_per_node=$NUM_GPUS run_layoutlmv2.py` (single node, multi-gpu), I encounter ``` RuntimeError: Make sure the number of processes can be divided by the number of nodes ``` ## What this PR fixes Fixes a one character typo/bug to run using multiple GPUs <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer: @stas00, Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
02-17-2023 10:40:29
02-17-2023 10:40:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @amyeroberts
transformers
21,674
closed
KerasMetricCallback expecting dictionary but receiving numpy array
### System Info Running in Google Colab with `!pip install transformers evaluate` as the first cell. The results of `transformers-cli env` are: - `transformers` version: 4.26.1 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.12.1 - PyTorch version (GPU?): 1.13.1+cu116 (False) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction [View the Google Colab here](https://colab.research.google.com/drive/1Pgc1jkZZbMmOF4O8Tz4Nz81V5FqkqCr-?usp=sharing) Code: ``` !pip install transformers evaluate import tensorflow as tf import evaluate from numpy import argmax as np_argmax from transformers import create_optimizer from transformers.keras_callbacks import KerasMetricCallback tf.debugging.disable_traceback_filtering() train_texts = ["This is class 0", "I am a class 1 sentence", "Class 2", "Also class 2"] train_labels = [0, 1, 2, 2] test_texts = ["A class 1 example", "Testing class 0"] test_labels = [1, 0] num_classes = 3 batch_size = 16 def create_dataset(texts, labels): dataset = tf.data.Dataset.from_tensor_slices((texts, labels)) return dataset.shuffle(10000).batch(batch_size).prefetch(tf.data.AUTOTUNE) train_dataset = create_dataset(train_texts, train_labels) test_dataset = create_dataset(test_texts, test_labels) encoder = tf.keras.layers.TextVectorization() encoder.adapt(train_dataset.map(lambda text, label: text)) model = tf.keras.Sequential([ encoder, tf.keras.layers.Embedding( input_dim=len(encoder.get_vocabulary()), output_dim=64, # Use masking to handle the variable sequence lengths mask_zero=True), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)), tf.keras.layers.Dense(64, activation='relu'), tf.keras.layers.Dense(num_classes, activation='sigmoid') ]) num_epochs = 5 batches_per_epoch = len(train_texts) // batch_size total_train_steps = int(batches_per_epoch * num_epochs) optimizer, schedule = create_optimizer(init_lr=2e-5, num_warmup_steps=0, num_train_steps=total_train_steps) model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(), optimizer=optimizer) accuracy = evaluate.load("accuracy") def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np_argmax(predictions, axis=1) return accuracy.compute(predictions=predictions, references=labels) metric_callback = KerasMetricCallback(metric_fn=compute_metrics, eval_dataset=test_dataset) callbacks = [metric_callback] model.fit(x=train_dataset, validation_data=test_dataset, epochs=num_epochs, callbacks=callbacks) ``` Full error stack trace: ``` Epoch 1/5 1/1 [==============================] - ETA: 0s - loss: 1.0996 --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-17-e77f61379ec7>](https://localhost:8080/#) in <module> ----> 1 model.fit(x=train_dataset, validation_data=test_dataset, epochs=num_epochs, callbacks=callbacks) 3 frames [/usr/local/lib/python3.8/dist-packages/keras/utils/traceback_utils.py](https://localhost:8080/#) in error_handler(*args, **kwargs) 59 def error_handler(*args, **kwargs): 60 if not 
tf.debugging.is_traceback_filtering_enabled(): ---> 61 return fn(*args, **kwargs) 62 63 filtered_tb = None [/usr/local/lib/python3.8/dist-packages/keras/engine/training.py](https://localhost:8080/#) in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing) 1710 epoch_logs.update(val_logs) 1711 -> 1712 callbacks.on_epoch_end(epoch, epoch_logs) 1713 training_logs = epoch_logs 1714 if self.stop_training: [/usr/local/lib/python3.8/dist-packages/keras/callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs) 452 logs = self._process_logs(logs) 453 for callback in self.callbacks: --> 454 callback.on_epoch_end(epoch, logs) 455 456 def on_train_batch_begin(self, batch, logs=None): [/usr/local/lib/python3.8/dist-packages/transformers/keras_callbacks.py](https://localhost:8080/#) in on_epoch_end(self, epoch, logs) 236 predictions = {key: predictions[key] for key in self.output_cols} 237 else: --> 238 predictions = {key: val for key, val in predictions.items() if key not in ignore_keys + ["loss"]} 239 prediction_list.append(predictions) 240 if not self.use_keras_label: AttributeError: 'numpy.ndarray' object has no attribute 'items' ``` ### Expected behavior The code is adapted from a [HuggingFace text classification tutorial](https://huggingface.co/docs/transformers/tasks/sequence_classification#text-classification) and a [TensorFlow text classification with an RNN tutorial](https://www.tensorflow.org/text/tutorials/text_classification_rnn). The optimizer, metrics and callbacks are from the HuggingFace tutorial. The encoder and model are from the TensorFlow tutorial. The error given is `AttributeError: 'numpy.ndarray' object has no attribute 'items'` and occurs from [line 237 of keras_callbacks.py](https://github.com/huggingface/transformers/blob/main/src/transformers/keras_callbacks.py#L237). The code around this seems to expect to be dealing with a dictionary or a subclass of a dictionary as a result of the `predict_on_batch` function called on the model. However, [the documentation for the TensorFlow Model class](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict_on_batch), [the documentation for the TensorFlow Sequential Class (which subclasses Model) ](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential#predict_on_batch) and [the source code for the `predict_on_batch` method](https://github.com/keras-team/keras/blob/v2.11.0/keras/engine/training.py#L2547-L2572) show that it returns a numpy array. I would expect this code not to error, and the callback to successfully call the metrics function with the expected predictions.
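A hedged sketch of the kind of change that would accommodate this (not the actual patch, just the idea): keep the dict filtering for `transformers`-style outputs, and pass bare arrays through untouched.

```python
def normalize_batch_predictions(predictions, ignore_keys=(), output_cols=None):
    """Sketch: make the callback tolerate plain Keras outputs as well as transformers-style dicts."""
    if isinstance(predictions, dict):
        if output_cols is not None:
            return {key: predictions[key] for key in output_cols}
        return {key: val for key, val in predictions.items() if key not in tuple(ignore_keys) + ("loss",)}
    # Plain Keras models (e.g. tf.keras.Sequential) return a bare numpy array (or tuple), so keep it as-is.
    return predictions
```

The fix that was eventually merged (see the comments below) may differ in its details; the point is just that `predict_on_batch` on a plain `tf.keras` model returns an array, not a dict.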
02-17-2023 10:35:24
02-17-2023 10:35:24
cc @Rocketknight1 <|||||>Hi @leadbetterben, the problem arises because the metric callback was intended for use with `transformers` models, which generally return dicts or tuples of outputs rather than just a single array. This was an oversight on my part - I'll see if I can push a fix!<|||||>@leadbetterben I've created [a PR](https://github.com/huggingface/transformers/pull/21727) to resolve this issue. Can you try it out? To use the PR branch, replace the first block in your Colab notebook with this: ``` !pip install git+https://github.com/huggingface/transformers.git@metric_callback_fix !pip install evaluate ```<|||||>@Rocketknight1 I've just tested it out and it seems to work fine. Thank you for that 👍 <|||||>@leadbetterben The PR has now been merged. You can use it by installing `transformers` from `main` with `!pip install git+https://github.com/huggingface/transformers.git`. It'll also be included in our next release, after which you can go back to just using `pip install transformers`. Thanks again for the bug report!<|||||>Thank you @Rocketknight1 !