| column | type | value summary |
| --- | --- | --- |
| repo | string (classes) | 1 value |
| number | int64 | 1 to 25.3k |
| state | string (classes) | 2 values |
| title | string (lengths) | 1 to 487 characters |
| body | string (lengths) | 0 to 234k characters, nullable |
| created_at | string (lengths) | 19 characters |
| closed_at | string (lengths) | 19 characters |
| comments | string (lengths) | 0 to 293k characters |
transformers
21,069
closed
Update squad.py
# What does this PR do? Fix a bug for the Splinter Tokenizer to account for the extra [QUESTION] and period token. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-09-2023 22:35:43
01-09-2023 22:35:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21069). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,068
closed
DeBERTa Wrong Dimension for MLM Prediction Head
### System Info From the [original implementation](https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/deberta/mlm.py#L21), the MLM head transforms the hidden dimension to the embedding dimension. However, it seems that in the HF version, we go from `hidden_size` to `hidden_size`. Shouldn't it be from `hidden_size` to `embedding_size`, especially since the [embeddings get tied](https://github.com/huggingface/transformers/blob/v4.24.0/src/transformers/modeling_utils.py#L1203) eventually? ### Who can help? @ArthurZucker and @younesbelkada ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction N/A ### Expected behavior ` DebertaPredictionHeadTransform.dense.weight` should be of size `hidden_size, embedding_size`, not `hidden_size, hidden_size`
01-09-2023 22:35:11
01-09-2023 22:35:11
Not sure I understand where you found that we only go from `hidden_size` to `hidden_size`, but for `MLM`, we use the `DebertaOnlyMLMHead`, which uses a `decoder` that is the prediction head. See `self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False)` [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta/modeling_deberta.py#L1146)<|||||>Ah sorry, I forgot to include this line! https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L872 It looks like it checks for an attribute `embedding_size` in the config, but defaults to `hidden_size` if not found. It doesn't seem like that attribute is present in the config though? Am I setting up the model incorrectly?<|||||>Okay, again: the head you mentioned, `DebertaPredictionHeadTransform`, is just a head, and it does not use the `DebertaV2Embedding`. It is only ever used as a head (not as an entire model). The size is correct; what you are looking for is in `DebertaForMaskedLM`, whose `self.model` attribute encompasses the `embedding` layer.<|||||>Hm, okay, thanks for the help! Seems I misunderstood.
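For illustration, a minimal sketch of the head dimensions discussed in this thread. This is not the actual `transformers` DeBERTa code; it is a simplified stand-in that assumes only the standard config attributes `hidden_size`, `embedding_size` (optional), and `vocab_size`:

```python
import torch.nn as nn


class SketchMLMHead(nn.Module):
    """Simplified MLM head: transform in embedding_size, then project to the vocabulary."""

    def __init__(self, config):
        super().__init__()
        # Fall back to hidden_size when the config defines no separate embedding_size.
        embedding_size = getattr(config, "embedding_size", config.hidden_size)
        self.dense = nn.Linear(config.hidden_size, embedding_size)
        self.activation = nn.GELU()
        self.layer_norm = nn.LayerNorm(embedding_size)
        # This projection is what ends up tied to the input embedding matrix.
        self.decoder = nn.Linear(embedding_size, config.vocab_size, bias=False)

    def forward(self, hidden_states):
        hidden_states = self.layer_norm(self.activation(self.dense(hidden_states)))
        return self.decoder(hidden_states)
```

When the config has no `embedding_size`, the fallback makes `dense` a `hidden_size -> hidden_size` projection, which is exactly the shape the reporter observed.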
transformers
21,067
closed
Update task summary
This is the second part of updating the task summary to be more conceptual. After a brief introduction and background to the tasks Transformers can solve in [part 1](https://github.com/huggingface/transformers/pull/21014), this PR is a bit more advanced and digs deeper into explaining how Transformer solves these tasks. ### To-do: - [x] Add computer vision section - [x] Add NLP section
01-09-2023 21:52:11
01-09-2023 21:52:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok, I'm finally finished with the first draft (took a bit longer to learn some models I wasn't familiar with)! I'd appreciate a general review of the scope of this page to make sure we're aligned (ie, are some sections too in-depth, are some not explained well enough?). Thanks in advance @sgugger @MKhalusova ! πŸ₯Ή Afterward, I'll ping one of our audio and computer vision experts for a more in-depth review of those sections πŸ™‚ <|||||>Thanks for the feedback, I added some images to go along with the text! @NielsRogge, would you mind reviewing the computer vision section? This guide is a high-level overview, and the goal is to help users understand how a certain task is solved by a model. Please feel free to let me know if it's too detailed, not detailed enough, or if I got something wrong! Also, if you know of a good beginner's resource for computer vision we can link to, that'd be great as well to set expectations for the reader. Thanks! πŸ‘ @sanchit-gandhi, if you could do the same with the audio section, that'd be awesome. Thank you! πŸ‘
transformers
21,066
closed
Update docstring for CLIPConfig
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes missing imports in CLIPConfig docstring. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-09-2023 18:41:29
01-09-2023 18:41:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, you will need to refresh your circleCI permissions and push an empty commit so we can check the tests are passing.<|||||>@sgugger Done.
transformers
21,065
closed
Fixed issue #21053
# What does this PR do? There was a typo in Line 449 in [huggingface/transformers/blob/main/src/transformers/models/gpt2/modeling_tf_gpt2.py](https://github.com/huggingface/transformers/blob/48d4e147d824efab97637947709d5aa67c809b3d/src/transformers/models/gpt2/modeling_tf_gpt2.py#L449) where the code was doing a check between input_ids and self.vocab_size but resize_token_embeddings change self.config.vocab_size so we were getting the error described in the issue, to overcome this I replaced it with self.config.vocab_size and it worked. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-09-2023 15:35:38
01-09-2023 15:35:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante and @Rocketknight1 <|||||>@sgugger feel free to merge if you approve. As I wrote above, other models have a similar problem (which require a more elaborate fix)<|||||>@susnato can you remove the `Fixes https://github.com/huggingface/transformers/issues/21053` at the top? That way, the issue stays open and I'll likely won't forget to fix the other models :)<|||||>> @susnato can you remove the `Fixes https://github.com/huggingface/transformers/issues/21053` at the top? That way, the issue stays open and I'll likely won't forget to fix the other models :) Hi, @gante I removed the line...is it ok now?<|||||>> This is absolutely correct. `self.vocab_size` can easily get stale when the vocabulary gets updated, and the check should be done against the config. > > (there are other models with this issue, where the fix needs to be slightly different, so I'll have a look very soon) Hi, @gante if you want, I would be happy to look into this and fix if I can.<|||||>@susnato sounds good! My plan consists in removing all references to `self.vocab_size`, deleting the variable whenever it is a variable that is set at `__init__` time from the `config` (if needed, store the `config` in `self.config` instead, since it will hold the mutable vocabulary size). If you search for "tf.cast(self.vocab_size", you will find all matches that will likely have to be touched.<|||||>> @susnato sounds good! > > My plan consists in removing all references to `self.vocab_size`, deleting the variable whenever it is a variable that is set at `__init__` time from the `config` (if needed, store the `config` in `self.config` instead, since it will hold the mutable vocabulary size). > > If you search for "tf.cast(self.vocab_size", you will find all matches that will likely have to be touched. Hi @gante I am going to check for all models in `src/transformers/models/modeling_tf_<model>.py` to remove references of self.vocab_size and also I found some references of self.vocab_size in some of the `<model>MLMHead`, I need to change them too right? <|||||>@susnato yes. If we look at the corresponding PT implementation e.g. for Albert, the layer classes store `self.config = config` for future use, as opposed to individual attributes of `config`. Making the switch here protects us from errors like the one that originated this PR :)
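A toy illustration of the failure mode behind this fix, independent of the real TF GPT-2 internals: an attribute copied from the config at construction time does not see a later vocabulary resize, whereas reading `config.vocab_size` at call time does.

```python
class ToyConfig:
    def __init__(self, vocab_size):
        self.vocab_size = vocab_size


class ToyModel:
    def __init__(self, config):
        self.config = config
        self.vocab_size = config.vocab_size  # frozen copy taken at __init__ time

    def check_ids(self, input_ids):
        # Reading the live config sees vocabulary resizes; self.vocab_size would not.
        return max(input_ids) < self.config.vocab_size


config = ToyConfig(vocab_size=50257)
model = ToyModel(config)
config.vocab_size += 3           # roughly what resize_token_embeddings does to the config
print(model.vocab_size)          # 50257 -- stale
print(model.config.vocab_size)   # 50260 -- current
print(model.check_ids([50258]))  # True: a newly added token passes the check
```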
transformers
21,064
closed
Preserving gradient flow through CLIP Processor
### Feature request Hi, I was using the HF CLIP implementation to build a VQGAN CLIP Implementation and noticed that the CLIPProcessor forces conversion to PIL Images for efficiency. However, when inputting torch tensor images to the processor, this breaks gradient flow. ### Motivation Would like to be able to backpropagate through CLIP image processing steps ### Your contribution This was my quick and hacky fix, using torchvision to do the same transformations and processing steps. I'd be happy to properly code up a better equivalent and submit a pull request if you think this is a feature worth adding. ``` class ProcessorGradientFlow(): """ This wraps the huggingface CLIP processor to allow backprop through the image processing step. The original processor forces conversion to numpy then PIL images, which is faster for image processing but breaks gradient flow. """ def __init__(self, device="cuda") -> None: self.device = device self.processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") self.image_mean = [0.48145466, 0.4578275, 0.40821073] self.image_std = [0.26862954, 0.26130258, 0.27577711] self.normalize = torchvision.transforms.Normalize( self.image_mean, self.image_std ) self.resize = torchvision.transforms.Resize(224) self.center_crop = torchvision.transforms.CenterCrop(224) def preprocess_img(self, images): images = self.center_crop(images) images = self.resize(images) images = self.center_crop(images) images = self.normalize(images) return images def __call__(self, images=[], **kwargs): processed_inputs = self.processor(**kwargs) processed_inputs["pixel_values"] = self.preprocess_img(images) processed_inputs = {key:value.to(self.device) for (key, value) in processed_inputs.items()} return processed_inputs ```
01-09-2023 14:09:07
01-09-2023 14:09:07
cc @ArthurZucker and @amyeroberts <|||||>Sorry, here's a properly formatted snippet :) These are just a few of the transformations in the hf implementation, but would be happy to properly implement all of the transforms in the CLIPImageProcessor class ``` class ProcessorGradientFlow(): """ This wraps the huggingface CLIP processor to allow backprop through the image processing step. The original processor forces conversion to numpy then PIL images, which is faster for image processing but breaks gradient flow. """ def __init__(self, device="cuda") -> None: self.device = device self.processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14") self.image_mean = [0.48145466, 0.4578275, 0.40821073] self.image_std = [0.26862954, 0.26130258, 0.27577711] self.normalize = torchvision.transforms.Normalize( self.image_mean, self.image_std ) self.resize = torchvision.transforms.Resize(224) self.center_crop = torchvision.transforms.CenterCrop(224) def preprocess_img(self, images): images = self.center_crop(images) images = self.resize(images) images = self.center_crop(images) images = self.normalize(images) return images def __call__(self, images=[], **kwargs): processed_inputs = self.processor(**kwargs) processed_inputs["pixel_values"] = self.preprocess_img(images) processed_inputs = {key:value.to(self.device) for (key, value) in processed_inputs.items()} return processed_inputs ``` <|||||>Hi @ErwannMillon, thanks for raising this issue! Unfortunately, you're right and the gradient flow won't be preserved when passing images through the image processor. This will occur even if the images aren't cast to `PIL.Image.Image` i.e. if `do_resize=False`, as all input images are converted to numpy arrays. This is to ensure all supported inputs (PIL images, and numpy, tensorflow, jax and pytorch arrays) are processed in the same way. Training VQGAN CLIP is a great use case for our CLIP models and seems like a good fit for a [research project example](https://github.com/huggingface/transformers/tree/main/examples/research_projects). If you would like to contribute this we'd be very happy to have it added to the repo and review any PRs. <|||||>Great, thanks for getting back to me. Would be happy to work on this in my spare time and submit a PR. But just to be clear, would you just be interested in having a VQGAN-CLIP specific research project that works around the issue with the HF Processor class, or a pull request that also modifies this class directly? (for example, with a preserve_gradient or convert_to_pil parameter that would use the torchvision transforms)<|||||>For the VQGAN-CLIP, I already have this repo that uses the HF clip model: https://github.com/ErwannMillon/Simple-VQGAN-CLIP I can clean this up some more to get it to the standard of the other projects in the research project examples you sent me, but was just wondering if you would be interested in extending the CLIPProcessor class<|||||>> Would be happy to work on this in my spare time and submit a PR. Great! Excited to have this added to the repo and seeing the PR :) > would you just be interested in having a VQGAN-CLIP specific research project that works around the issue with the HF Processor class, or a pull request that also modifies this class directly? A specific research project that works around the issue. For the processor class, you can choose what that looks like within the research project i.e. is it completely independent or an extension. 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
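A short, self-contained check of the claim in this thread that the torchvision transforms keep the computation differentiable. No CLIP weights or processor download are needed, and the transform order is simplified to resize, crop, normalize:

```python
import torch
import torchvision

normalize = torchvision.transforms.Normalize(
    mean=[0.48145466, 0.4578275, 0.40821073],
    std=[0.26862954, 0.26130258, 0.27577711],
)
resize = torchvision.transforms.Resize(224)
center_crop = torchvision.transforms.CenterCrop(224)

# Stand-in for a generated image batch, e.g. the output of a VQGAN decoder.
images = torch.rand(1, 3, 256, 256, requires_grad=True)
pixel_values = normalize(center_crop(resize(images)))

# Stand-in for a CLIP similarity loss; any scalar suffices to check gradient flow.
pixel_values.sum().backward()
print(images.grad is not None)  # True: gradients flow back through the preprocessing
```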
transformers
21,063
closed
[WIP] [Whisper] Add specaugment
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Hi @ArthurZucker πŸ‘‹, As discussed in another conversation, in this PR I try to add [SpecAugment](https://arxiv.org/abs/1904.08779) to whisper models. It was used as one of regularization methods to train the `large-v2` model (https://github.com/openai/whisper/discussions/661). Here the `SpecAugment` is implemented into `WhisperFeatureExtractor` in numpy. It masks the computed fbank features along the time and the feature axis. Here are the steps in my mind. Please correct me if I miss something. - [x] Return `attention_mask` by `pad` function to get the actual input lengths in the batch. And rescale it from sample level to feature level (48000 -> 3000) - [x] Copy `_compute_mask_indices` function of wav2vec2, which will be used to generate masks - [x] Add `_mask_input_features` function to mask along time or feature axis - [ ] Add `apply_spec_augment`, `mask_time_prob`, etc to config and `__call__` function It's still in draft. I will add the parameters to config and fix the test errors later :) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-09-2023 10:33:37
01-09-2023 10:33:37
_The documentation is not available anymore as the PR was closed or merged._<|||||>Seems like a very needed feature! what is the status? was this functionality tested? <|||||>And as mentioned by @samuelazran we should add at least one test, if possible comparing with the original masking (if openAI added it to their codebase) otherwise an integration test.<|||||>I was waiting for the validation of basic functions to continue the further work. Thanks for the comments! Will finish the rest <|||||>Hi @ArthurZucker, do you have any suggestions of how to differentiate train and validation/test sets in order to only augment train set ? In my mind, we perhaps need to add SpecAugment related parameters to the `__call__` function of `WhisperFeatureExtractor`, then update training example script here https://github.com/huggingface/transformers/blob/2411f0e465e761790879e605a4256f3d4afb7f82/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py#L428-L447 to ```python def prepare_dataset(batch, **kwargs): # process audio sample = batch[audio_column_name] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"], **kwargs) # process audio length batch[model_input_name] = inputs.get(model_input_name)[0] batch["input_length"] = len(sample["array"]) # process targets input_str = batch[text_column_name].lower() if do_lower_case else batch[text_column_name] batch["labels"] = tokenizer(input_str).input_ids return batch with training_args.main_process_first(desc="dataset map pre-processing"): vectorized_datasets = DatasetDict() if training_args.do_train: # NB: also add SpecAugment parameters to DataTrainingArguments vectorized_datasets["train"] = raw_datasets["train"].map( lambda example: prepare_dataset( example, apply_spec_augment=data_args.apply_spec_augment, mask_time_prob=data_args.mask_time_prob, mask_feature_prob=data_args.mask_feature_prob, ), remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=data_args.preprocessing_num_workers, desc="preprocess train dataset", ) if training_args.do_eval: vectorized_datasets["eval"] = raw_datasets["eval"].map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=data_args.preprocessing_num_workers, desc="preprocess eval dataset", ) ``` Also cc @sanchit-gandhi :)<|||||>I think I am in favor of just adding the `do_spec_augment` argument in the call of the feature extractor, which will default to `False`. The processing of training and validation should indeed be taken care of outside of the modelling.<|||||>Hey @bofenghuang, Really cool to see this new feature addition for SpecAug! Could well provide a nice boost for Whisper fine-tuning πŸš€ Not sure I fully agree that we should add SpecAug to the feature extractor. IMO it's a regularisation technique that belongs in the modelling file which is in many ways analogous to dropout (we wouldn't ever add dropout to the feature extractor - this is a method that relates to the modelling code and thus we add it there). Adding SpecAug to the feature extractor causes two problems: 1. We pre-process our training dataset once at the start of training to obtain our log-Mel spectrograms. Using SpecAug in our feature extractor means that we generate a **fixed set** of masked features in these spectrograms. If we train for multiple epochs, we re-use our pre-processed dataset, and so have the **same** masked features for each epoch. 
This is analogous to dropping out the same nodes each time we do dropout -> the model will fit to these fixed SpecAug features, defeating the point of using this regularisation technique! What we actually want to do is mask **different** features in our spectrograms each time we use the data, i.e. mask in a stochastic way. 2. We need different pre-processing logic for our train/eval sets. We need to 'turn on' SpecAug for the train set and 'turn off' SpecAug for the eval set. Both of these problems are bypassed by putting SpecAug in the modelling file: 1. We mask a different set of features at each forward pass in a stochastic way ('true' form of dropout) 2. We only apply SpecAug when we train, which we can access with the attribute `self.training`. See: https://github.com/huggingface/transformers/blob/f0fc7912980234f3711b261f13b4e77fa7a43fb5/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1253-L1254 So if it's ok with you I think we should modify this PR to move the SpecAug logic to the modelling file!<|||||>Oh I see, thanks for thinking this far @sanchit-gandhi ! You are indeed right πŸ‘πŸ» Sorry @bofenghuang for misleading you πŸ˜… <|||||>Hi @sanchit-gandhi, Thanks and totally agree with you! I've put it in the feature extractor just because it's a numpy version. I think we perhaps need to rewrite it in PyTorch if we want to have it in the modeling code? cc @ArthurZucker <|||||>Think we can apply the same logic that we do in Wav2Vec2 and compute the mask using NumPy (no matmuls here, simply building a binary array of indices to mask/not mask in a stochastic way) and apply the mask in PyTorch to our tensors (hidden states). So `_compute_mask_indices` is NumPy: https://github.com/huggingface/transformers/blob/071529bd548c52b27d3a3d9414db086692b37d2f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L133 And `_mask_hidden_states` PyTorch: https://github.com/huggingface/transformers/blob/071529bd548c52b27d3a3d9414db086692b37d2f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1232 You can probably copy these two methods directly from `modeling_wav2vec2.py` and apply the masking as required to the `input_features` in Whisper!
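A rough sketch of the approach agreed on in this thread: build the mask stochastically in NumPy and apply it inside the model in PyTorch, only while training. The mask construction here is deliberately simplified and the helper names are hypothetical; the real change should reuse wav2vec2's `_compute_mask_indices`.

```python
import numpy as np
import torch


def simple_time_mask(batch_size, seq_len, mask_prob=0.05, mask_len=10):
    """Very rough stand-in for wav2vec2's _compute_mask_indices (NumPy, stochastic)."""
    mask = np.zeros((batch_size, seq_len), dtype=bool)
    num_spans = max(1, int(mask_prob * seq_len / mask_len))
    for b in range(batch_size):
        starts = np.random.randint(0, seq_len - mask_len, size=num_spans)
        for start in starts:
            mask[b, start : start + mask_len] = True
    return torch.from_numpy(mask)


def mask_input_features(input_features, training):
    """Apply SpecAugment-style time masking to (batch, num_mel_bins, seq_len) features."""
    if not training:  # analogous to dropout: no masking at eval time
        return input_features
    batch_size, _, seq_len = input_features.shape
    time_mask = simple_time_mask(batch_size, seq_len)            # (batch, seq_len)
    return input_features.masked_fill(time_mask[:, None, :], 0.0)


features = torch.randn(2, 80, 3000)   # Whisper log-mel features are (batch, 80, 3000)
masked = mask_input_features(features, training=True)
print((masked == 0).float().mean())   # fraction of masked cells, different on every call
```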
transformers
21,062
closed
Fixed issue #21039
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #21039 When using AutoModelForCausalLM.from_pretrained(..., low_cpu_mem_usage=True), for some models (with modified configs) are having problems loading their weights from the model_state_dict, this PR solves that. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-09-2023 09:01:41
01-09-2023 09:01:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, @sgugger I solved the `check_code_quality` test but the `tests_torch` is still giving me error, I ran the whole test locally which was failing before(by checking from this [link](https://app.circleci.com/pipelines/github/huggingface/transformers/55152/workflows/03ea844b-2a40-4142-ab12-f7378c66ea5f/jobs/665222)) and also ran the specific test locally(tests/models/auto/test_modeling_auto.py) which was causing the error, both seem to run perfectly fine in my local system. (I also updated the environment before running them locally). Would you please look in this matter? I can't seem to find the problem why tests are failing..... <|||||>Hi, @sgugger I did all the changes you mentioned, and all the checks are successful now.
transformers
21,061
closed
Force_download=True not working, `No such file or directory: './.cache/models--togethercomputer--GPT-JT-6B-v1/refs/main'`
### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @ArthurZucker ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Got the error while running the following command: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1", cache_dir='./.cache/') ``` ### Expected behavior Should download the model anew rather than looking in the cache
01-09-2023 08:19:30
01-09-2023 08:19:30
Hi, @bhavnicksm can you provide more information about your system (by using `transformers-cli env` in the terminal) so that it will be easier to reproduce the code? <|||||>Can you also tell us if there is a folder `./.cache/` where you execute this code? Reading the error, it might simply be because the cache folder was not properly created.<|||||>@sgugger The cache folder gets created properly and there's a folder of that name, but it is not populated with any files. It's just empty.<|||||>Hi @susnato, updated the original issue with the relevant information. Thanks for the suggestion to use `transformers-cli`. <|||||>I ran your code snippet and cannot reproduce your issue. I also don't understand why that snippet of code should download anything anew and not look at the cache.<|||||>@sgugger the issue has been resolved, thanks for looking into it! πŸ«‚ I believe it was a connection issue with the servers, because even SentenceTransformers was giving an error about how it couldn't connect to HF. They both started to work at the same time a few hours later. About the logic in the reproduction code, the default cache path wasn't working, so providing another cache path with `force_download=True` might make it download again. Never mind, since it's been resolved πŸ˜„
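For reference, the call pattern the reporter describes wanting: point `from_pretrained` at a fresh cache directory and force a re-download rather than reusing a partially populated cache (illustrative only; the underlying problem here turned out to be connectivity):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "togethercomputer/GPT-JT-6B-v1",
    cache_dir="./.cache/",
    force_download=True,  # re-fetch files even if something is already in the cache
)
```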
transformers
21,060
closed
add GPTSAN model
# Model description GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and works with both text generation and masked language modeling. To add this model to Transformers, I did the following: porting GPTSAN to PyTorch, model conversion, creating model cards on the HuggingFace Hub, and porting the generation code. The model card has already been uploaded (https://huggingface.co/Tanrei/GPTSAN-japanese/). The tokenizer uses GPT-NeoX-Japanese, and only new vocabulary files are uploaded to the model card. Minor differences are absorbed within the generation algorithm in the model's source code. GPTSAN repository: https://github.com/tanreinama/GPTSAN Discussion of the HuggingFace integration: https://github.com/tanreinama/GPTSAN/issues/2 Thanks to: @ArthurZucker and @younesbelkada
01-09-2023 07:54:10
01-09-2023 07:54:10
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21060). All of your documentation changes will be reflected on that endpoint.<|||||>generate() now works with greedy_gen_mode, but I want contrastive_search to be the default. Is there any reference code somewhere for that?<|||||>Yes, contrastive search should be supported in `transformers` but I think you need to tweak the caching mechanism. Maybe @gante can help here as I am not really sure πŸ™ <|||||>ok. I was confused about contrastive_search. This will work almost fine. ``` model.config.use_cache=True model.config.do_sample=True c = model.generate(x_tok, logits_processor=LogitsProcessorList([TopKLogitsWarper(120)])) ``` I would like to override _get_logits_processor and add TopKLogitsWarper to the default logs_processor. ``` logits_processor = super()._get_logits_processor(...) if generation_config.top_k is not None: logits_processor.append(TopKLogitsWarper(generation_config.top_k)) return logits_processor ``` There was also a misunderstanding about the caching mechanism. I thought that cache saves everything up to the last time, and that SequenceLength is 1 every time forward is called, but it seems that's not the case. I can make it compatible.<|||||>@tanreinama @younesbelkada contrastive search _should_ work out of the box if the model uses the usual caching mechanism. Prefix LM models are not the case, sadly (it's probably the same issue as GIT, which is also a prefix LM model) πŸ˜… I'd suggest to skip contrastive search for now, and to fix it in a subsequent PR (skip = skip tests and override `contrastive_search` such that an informative exception is thrown). I should be able to give better advice after I see what's happening with GIT :)<|||||>@ArthurZucker @younesbelkada I have committed some updates in response to your comments. In unit tests, the `wav2vec2_with_lm` module is causing errors. Is this due to conflicts? I didn't touch the this package...<|||||>The failing test is not related to you! If you pull from main it might get resolved! However you have `FAILED tests/models/gptsan_japanese/test_modeling_gptsan_japanese.py::GPTSANJapaneseForConditionalGenerationTest::test_logits - Failed: Timeout >120.0s` which means either the test should be marked as `#slow` or there is an issue int his test πŸ‘πŸ» <|||||>I sync and pull from main so it closed automatically. I'll create new PR from merged code. thx,
transformers
21,059
closed
Can the transformer models run without any local storage at all?
### Feature request We have a use case where we'd like to download transformer models from our S3 or other storage location directly into memory (without saving it in local storage), finetune the model and save the final model directly to the remote storage through an API. We're wondering if this use case of not using local storage at all is possible using the current library? ### Motivation Our usecase requires minimizing local storage usage. ### Your contribution We are trying to figure out if this feature is already supported
01-09-2023 07:04:43
01-09-2023 07:04:43
This is indeed supported by the library.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
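For concreteness, a minimal sketch of one way to approximate a no-local-storage workflow with plain PyTorch serialization, bypassing the file-based `from_pretrained` path. The remote layout (a config JSON plus a serialized state dict) and the `fetch_bytes`/`put_bytes` helpers are assumptions standing in for your storage client (e.g. boto3 `get_object`/`put_object`), not part of any official API:

```python
import io
import json

import torch
from transformers import T5Config, T5ForConditionalGeneration


def fetch_bytes(key: str) -> bytes:  # hypothetical: read an object from remote storage
    raise NotImplementedError


def put_bytes(key: str, data: bytes) -> None:  # hypothetical: write an object to remote storage
    raise NotImplementedError


# Build the model entirely in memory: config from JSON, weights from a serialized state dict.
config = T5Config.from_dict(json.loads(fetch_bytes("models/t5-small/config.json")))
model = T5ForConditionalGeneration(config)  # randomly initialized, no disk access
state_dict = torch.load(io.BytesIO(fetch_bytes("models/t5-small/pytorch_model.bin")), map_location="cpu")
model.load_state_dict(state_dict, strict=False)  # strict=False tolerates tied/absent keys

# ... finetune ...

# Save back to remote storage without touching the local filesystem.
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
put_bytes("models/t5-small-finetuned/pytorch_model.bin", buffer.getvalue())
```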
transformers
21,058
closed
`rank = dist.get_rank()` throws `group error` while loading a model with `AutoModelForSeq2SeqLM.from_pretrained` under DeepSpeed
### System Info - `transformers` version: 4.25.1 - Platform: Linux-3.10.0-514.26.2.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.8.0a0+1606899 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: yes(deepseed) ### Who can help? @stas00 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm trying to test running `examples/pytorch/translation/run_translation.py` with `deepspeed`, using this [example](https://github.com/huggingface/transformers/issues/17534#issuecomment-1146249686) @stas00 had written (thanks beforehand) - script I've run ``` rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 4 \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \ --output_dir output_dir --overwrite_output_dir --max_source_length 128 \ --max_target_length 128 --val_max_target_length 128 --do_train \ --num_train_epochs 1 --learning_rate 3e-3 \ --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix 'translate English to Romanian: ' --max_train_samples 5 \ --deepspeed tests/deepspeed/ds_config_zero3_test.json --save_steps 5 ``` - `ds_config_zero3_test.json` - I changed `gradient_accumulation_steps, train_batch_size, train_micro_batch_size_per_gpu` values from `auto` to some int values, since `auto` value threw out error such as `check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size 8 != 2 * 1 * 1` ``` %%bash cat <<'EOT' > ds_config_zero3_test.json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "cpu", "pin_memory": true }, "offload_param": { "device": "cpu", "pin_memory": true }, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_fp16_weights_on_model_save": true }, "gradient_accumulation_steps": 1, "gradient_clipping": "auto", "steps_per_print": 2000, "train_batch_size": 2, "train_micro_batch_size_per_gpu": 2, "wall_clock_breakdown": false } EOT ``` - ERROR message - An error occurs while loading model, getting rank info using `torch.distributed.get_rank`, throwing `RuntimeError: The given group does not exist` - Maybe it's because I'm using older version of PyTorch? (`1.8.0a0+1606899`). 
``` [2023-01-09 00:34:14,932] [INFO] [partition_parameters.py:709:__init__] _all_gather_base API is not available in torch 1.8.0a0+1606899 Traceback (most recent call last): File "examples/pytorch/translation/run_translation.py", line 660, in <module> main() File "examples/pytorch/translation/run_translation.py", line 374, in main model = AutoModelForSeq2SeqLM.from_pretrained( File "/home/transformers/src/transformers/models/auto/auto_factory.py", line 463, in from_pretrained return model_class.from_pretrained( File "/home/transformers/src/transformers/modeling_utils.py", line 2299, in from_pretrained with ContextManagers(init_contexts): File "/home/transformers/src/transformers/utils/generic.py", line 359, in __enter__ self.stack.enter_context(context_manager) File "/opt/conda/lib/python3.8/contextlib.py", line 425, in enter_context result = _cm_type.__enter__(cm) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 400, in __enter__ print_rank_0( File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 49, in print_rank_0 rank = dist.get_rank() File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/comm.py", line 575, in get_rank return cdb.get_rank(group) File "/opt/conda/lib/python3.8/site-packages/deepspeed/comm/torch.py", line 175, in get_rank return torch.distributed.get_rank(group=group) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 645, in get_rank return _get_group_rank(group, _default_pg.rank()) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 191, in _get_group_rank raise RuntimeError("The given group does not exist") RuntimeError: The given group does not exist ``` - `ds_report` ``` -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- cpu_adam ............... [NO] ....... [OKAY] cpu_adagrad ............ [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] [WARNING] please install triton==1.0.0 if you want to use sparse attention sparse_attn ............ [NO] ....... [NO] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-dev package with apt [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] utils .................. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] spatial_inference ...... [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/opt/conda/lib/python3.8/site-packages/torch'] torch version .................... 
1.8.0a0+1606899 torch cuda version ............... 11.1 torch hip version ................ None nvcc version ..................... 11.1 deepspeed install path ........... ['/opt/conda/lib/python3.8/site-packages/deepspeed'] deepspeed info ................... 0.7.7, unknown, unknown deepspeed wheel compiled w. ...... torch 1.8, cuda 11.1 ``` ### Expected behavior I've expected it to run with multiple gpus but not running
01-09-2023 04:16:29
01-09-2023 04:16:29
That traceback doesn't look like an issue in the HF integration, indeed could you try some more recent pytorch first? You're not even using a released version of 1.8, but some nightly/rc version (`torch version .................... 1.8.0a0+1606899`). I'd try 1.12 or 1.13 (latest). Please let me know if it doesn't help and I will try to reproduce it.<|||||>> That traceback doesn't look like an issue in the HF integration, indeed could you try some more recent pytorch first? You're not even using a released version of 1.8, but some nightly/rc version (`torch version .................... 1.8.0a0+1606899`). I'd try 1.12 or 1.13 (latest). > > Please let me know if it doesn't help and I will try to reproduce it. I've changed the environment to following setting - `transformers` version: 4.22.2 - Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0a0+08820cb (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed And I succeeded running the script:) Thank you Although, I've noticed that when using deepspeed with huggingface Trainer, Training info gives `Number of trainable parameters` as zero ``` [INFO|trainer.py:1643] 2023-01-10 04:55:31,025 >> ***** Running training ***** [INFO|trainer.py:1644] 2023-01-10 04:55:31,025 >> Num examples = 10 [INFO|trainer.py:1645] 2023-01-10 04:55:31,025 >> Num Epochs = 3 [INFO|trainer.py:1646] 2023-01-10 04:55:31,025 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1647] 2023-01-10 04:55:31,025 >> Total train batch size (w. parallel, distributed & accumulation) = 16 [INFO|trainer.py:1648] 2023-01-10 04:55:31,025 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1649] 2023-01-10 04:55:31,025 >> Total optimization steps = 3 [INFO|trainer.py:1650] 2023-01-10 04:55:31,028 >> Number of trainable parameters = 0 ``` The same issue is here(https://discuss.huggingface.co/t/deepspeed-with-trainer-no-of-trainable-parameters-coming-to-be-0/27187). Thank you in advance @stas00:) I really appreciate your kindness<|||||>great to hear that it worked, @SoundProvider > Although, I've noticed that when using deepspeed with huggingface Trainer, Training info gives Number of trainable parameters as zero That means that the params weren't gathered under zero3. and when zero3 is used deepspeed puts placeholders with tensors of zero3. Please create a new issue and I will fix it. or if you feel inspired you can contribute a few lines of code that will check if the model is running under deepspeed and gather the params. It'd be something like this: ``` if is_deepspeed_zero3_enabled(): import deepspeed size = 0 for param in model.parameters(): with deepspeed.zero.GatheredParameters(param, modifier_rank=None): size += param.numel() ``` we do it one param at a time to avoid loading a potentially huge model onto cpu.<|||||>I'd love to try it out. I will go through some the codes and make a new issue if I find a way:) Thank you<|||||>Wonderful! The other service we could provide to future users is to find out which minimal pt version is required to make it work and assert if it's not the right one - in case you're interested to explore that one - but by all means this is only an invitation, please feel no pressure to do anything unless it gives you goosebumps when you think of doing it.<|||||>@stas00 Hello Stas. 
I've tested running two different models with both deepspeed and torch DDP. As you can see below, t5-large with deepspeed uses much less GPU memory than torch DDP, while OPT model with deepspeed doesn't show useful decrease. I've looked through deepspeed codes couldn't find any hints,, I have 2 questions - What would cause the differenct GPU memory decrease between two models? From what I've understood from [deepspeed.initialize](https://github.com/microsoft/DeepSpeed/blob/fe728e3ed880f27de2c21234f12b7aa6f672e825/deepspeed/runtime/pipe/engine.py#L138), deepspeed handles only tensors, not model blocks. - After what I read from[ deepspeed memory efficiency](https://www.deepspeed.ai/training/#memory-efficiency), I expected t5-large with deepspeed would show much more GPU memory decrease than I tested. Could you tell me any hint? Thank you beforehand for your great works ### Experiments 1. t5-large - deepspeed - script: `rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \ examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \ --output_dir output_dir --overwrite_output_dir --max_source_length 128 \ --max_target_length 128 --val_max_target_length 128 --do_train \ --num_train_epochs 1 --learning_rate 3e-3 \ --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix 'translate English to Romanian: ' --max_train_samples 5 \ --deepspeed tests/deepspeed/ds_config_zero3_NSML_test.json --per_device_train_batch_size 1` - ![image](https://user-images.githubusercontent.com/48939336/212818170-69ec13ab-bceb-4fbc-9099-2b7e72b08e48.png) - torch DDP - script: `python -m torch.distributed.launch --nproc_per_node=2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \ --output_dir output_dir --overwrite_output_dir --max_source_length 128 \ --max_target_length 128 --val_max_target_length 128 --do_train \ --num_train_epochs 10 --learning_rate 3e-3 \ --dataset_name wmt16 --dataset_config ro-en --source_lang en --target_lang ro \ --source_prefix 'translate English to Romanian: ' --max_train_samples 5 --per_device_train_batch_size 1` - ![image](https://user-images.githubusercontent.com/48939336/212818484-ca76c498-7725-4b31-ae2c-2f25e4633462.png) 2. 
OPT - model: [link](https://github.com/huggingface/transformers/tree/main/src/transformers/models/opt), version: 4.26.0.dev0 - used [OPTForCausalLM](https://github.com/huggingface/transformers/blob/2411f0e465e761790879e605a4256f3d4afb7f82/src/transformers/models/opt/modeling_opt.py#L808), with custom dataset - deepspeed - script: `rm -r output_dir; PYTHONPATH=src USE_TF=0 deepspeed --num_gpus 2 \ run_opt.py --model_name_or_path facebook/opt-1.3b --output_dir test \ --deepspeed ../tests/deepspeed/ds_config_zero3_NSML_test.json --do_train True --do_eval True \ --per_device_train_batch_size 1` - ![image](https://user-images.githubusercontent.com/48939336/212819257-6da9ef94-21a0-4d7c-8b4a-3f759588df77.png) - torch DDP - script: `rm -r test; python -m torch.distributed.launch --nproc_per_node=2 run_opt.py --model_name_or_path facebook/opt-1.3b --output_dir test \ --do_train True --do_eval True --per_device_train_batch_size 1` - ![image](https://user-images.githubusercontent.com/48939336/212819377-74b3bd99-0036-4c61-8995-1156d897dfcd.png) #### env info - `transformers` version: 4.26.0.dev0 - Platform: Linux-4.19.93-1.nbp.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.11.0 - PyTorch version (GPU?): 1.13.0a0+08820cb (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed <|||||>I won't trust `nvidia-smi` for measuring memory usage patterns, as it is not aware of cuda caching and you can't see peak memory usage either. You can repeat the above runs, but add `--skip_memory_metrics 0` and it'll print you all the memory usage stats at the end of each run. (only use this for debug as it slows training down) I'm not saying that you still won't see an issue, but I'm asking to do that as it'd give us a much precise memory usage stats. and ideally please make it into a new Issue and let's close this one. As this discussion is now totally unrelated to the topic of this Issue. Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
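For reference, a completed version of the parameter-counting snippet suggested earlier in this thread, assuming a model handled by DeepSpeed ZeRO-3 and the `transformers.deepspeed` helper available in the versions discussed here. Under ZeRO-3 the local tensors are placeholders, so each parameter is gathered one at a time to read its true size; outside ZeRO-3 this reduces to a plain `numel` sum.

```python
import deepspeed
from transformers.deepspeed import is_deepspeed_zero3_enabled


def count_trainable_parameters(model):
    total = 0
    for param in model.parameters():
        if not param.requires_grad:
            continue
        if is_deepspeed_zero3_enabled():
            # Gather one parameter at a time to avoid materializing the full model on CPU.
            with deepspeed.zero.GatheredParameters(param, modifier_rank=None):
                total += param.numel()
        else:
            total += param.numel()
    return total
```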
transformers
21,057
closed
Whisper decoding returns exception about outputs.logits shape
### System Info `transformers` version: 4.26.0.dev0 - Platform: Linux-5.10.0-20-amd64-x86_64-with-glibc2.31 - Python version: 3.9.2 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> Same error on cuda servers ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Run simple decoding with Whisper large: ``` speech_array, sampling_rate = torchaudio.load(fn) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) sound = resampler(speech_array).squeeze().numpy() input_features = processor(sound, return_tensors="pt", sampling_rate=16_000).input_features with torch.no_grad(): generated_ids = model.generate(inputs=input_features, max_length=1000) transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` Result is an exception: ``` Traceback (most recent call last): File "/home/user/test_whisper_hf.py", line 37, in <module> generated_ids = model.generate(inputs=input_features, max_length=1000) File "/home/user/.local/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/home/user/.local/lib/python3.9/site-packages/transformers-4.26.0.dev0-py3.9.egg/transformers/generation/utils.py", line 1352, in generate return self.greedy_search( File "/home/user/.local/lib/python3.9/site-packages/transformers-4.26.0.dev0-py3.9.egg/transformers/generation/utils.py", line 2135, in greedy_search next_token_logits = outputs.logits[:, -1, :] IndexError: index -1 is out of bounds for dimension 1 with size 0 ``` The output on this problematic file is ``` Seq2SeqLMOutput(loss=None, logits=tensor([], size=(1, 0, 51865)), past_key_values=((tensor([[[[ 1.3006e+00, -4.4066e-02, -2.5518e-02, ..., 1.6218e-01, ``` This happens only with a single file in the dataset of 10k files. ### Expected behavior No exception
01-09-2023 00:19:25
01-09-2023 00:19:25
cc @ArthurZucker <|||||>Hey! Could you provide a reproducing script with the dataset? The file might be corrupted.<|||||>To reproduce you can try this code ``` #!/usr/bin/env python3 from transformers import WhisperProcessor, WhisperForConditionalGeneration import torch import torchaudio processor = WhisperProcessor.from_pretrained("mitchelldehaven/whisper-large-v2-ru") model = WhisperForConditionalGeneration.from_pretrained("mitchelldehaven/whisper-large-v2-ru") speech_array, sampling_rate = torchaudio.load("test.wav") resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) sound = resampler(speech_array).squeeze().numpy() input_features = processor(sound, return_tensors="pt", sampling_rate=16_000).input_features with torch.no_grad(): generated_ids = model.generate(inputs=input_features, max_length=1000) transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] ``` with the attached file [test.zip](https://github.com/huggingface/transformers/files/10386796/test.zip) This thing happens with fine-tuned models between, not original ones.<|||||>I have the same issue. Model is not finetuned Could you find a workaround @nshmyrev ?<|||||>In this case, using an original model works: ```python from transformers import WhisperProcessor, WhisperForConditionalGeneration import torchaudio import torch fn = "/home/arthur_huggingface_co/transformers/Arthur/test.wav" processor = WhisperProcessor.from_pretrained("openai/whisper-large") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large") speech_array, sampling_rate = torchaudio.load(fn) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) sound = resampler(speech_array).squeeze().numpy() input_features = processor(sound, return_tensors="pt", sampling_rate=16_000).input_features with torch.no_grad(): generated_ids = model.generate(inputs=input_features, max_length=1000) transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] print(transcription) ``` I get ```python Duh duh duh duh uh huh. ``` When running with your model however, it seems that the `max_len` parameter is not taken into account, and the `input_ids` have a length of `449` which provokes the error. The model should stop. This can be caused because of various things, but I recommend setting the `max_length` to `448` as the model should not be fed with larger inputs. (it is the case for the original models. @RuABraun can you share the audio and a reproduction script? <|||||>I fixed it by lowering max_length. Thanks
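For reference, a minimal sketch of the suggested workaround applied to the reproduction script above (448 is taken from the comment as the decoder's maximum target length; adjust if the checkpoint's config says otherwise):

```python
import torch
import torchaudio
from transformers import WhisperProcessor, WhisperForConditionalGeneration

processor = WhisperProcessor.from_pretrained("mitchelldehaven/whisper-large-v2-ru")
model = WhisperForConditionalGeneration.from_pretrained("mitchelldehaven/whisper-large-v2-ru")

speech_array, sampling_rate = torchaudio.load("test.wav")
sound = torchaudio.transforms.Resample(sampling_rate, 16_000)(speech_array).squeeze().numpy()
input_features = processor(sound, return_tensors="pt", sampling_rate=16_000).input_features

with torch.no_grad():
    # Cap generation at the decoder's maximum target length so greedy search
    # stops before the positional embeddings run out.
    generated_ids = model.generate(inputs=input_features, max_length=448)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```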
transformers
21,056
closed
I have a problem: trained model with TensorFlow raises an error in the transformers pipeline
### Issue Type Bug ### Have you reproduced the bug with TF nightly? Yes ### Source source ### Tensorflow Version 2.8 ### Custom Code Yes ### OS Platform and Distribution _No response_ ### Mobile device _No response_ ### Python version _No response_ ### Bazel version _No response_ ### GCC/Compiler version _No response_ ### CUDA/cuDNN version _No response_ ### GPU model and memory _No response_ ### Current Behaviour? I'm using this GitHub text summarization project and I have a problem that I have been struggling with for two weeks and could not figure out. I'm using a notebook from this GitHub repository: https://github.com/flogothetis/Abstractive-Summarization-T5-Keras notebook link: https://github.com/flogothetis/Abstractive-Summarization-T5-Keras/blob/main/AbstractiveSummarizationT5.ipynb After training the model I want to use the Hugging Face transformers pipeline to generate summaries: **from transformers import pipeline summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf") summarizer("some text")** but it throws an error: **AttributeError: 'Functional' object has no attribute 'config'** Does anyone have an idea how I can solve it? Full error: AttributeError Traceback (most recent call last) /tmp/ipykernel_20/1872405895.py in ----> 1 summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf") 2 3 summarizer("The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen") /opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs) 432 break 433 --> 434 return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs) /opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs) 37 38 def __init__(self, *args, **kwargs): --> 39 super().__init__(*args, **kwargs) 40 41 self.check_model_type( /opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output) 548 549 # Update config with task specific parameters --> 550 task_specific_params = self.model.config.task_specific_params 551 if task_specific_params is not None and task in task_specific_params: 552 self.model.config.update(task_specific_params.get(task)) AttributeError: 'Functional' object has no attribute 'config' ### Standalone code to reproduce the issue ```shell summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf") summarizer("some text") ``` which throws: AttributeError: 'Functional' object has no attribute 'config' ### Relevant log output _No response_
01-08-2023 23:47:24
01-08-2023 23:47:24
You are using a Keras model here, but the `pipeline` can only deal with `TFPreTrainedModel`s (models of the Transformers library).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
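As a minimal sketch of that distinction (using the stock `t5-small` checkpoint for illustration rather than the notebook's custom Keras model): the pipeline expects a `TFPreTrainedModel` such as `TFAutoModelForSeq2SeqLM`, which carries the `config` attribute the traceback complains about. If the fine-tuned weights only exist as a plain Keras model, they would need to be loaded back into the matching `TF*ForConditionalGeneration` class first.

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")  # a TFPreTrainedModel, with a .config attribute

summarizer = pipeline("summarization", model=model, tokenizer=tokenizer, framework="tf")
print(summarizer("The US has passed the peak on new coronavirus cases, ...", max_length=30, min_length=5))
```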
transformers
21,055
closed
Add Spanish translation to community.mdx
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds Spanish translation to community.mdx Fixes #15947 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @osanseviero @omarespejel @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-08-2023 22:37:29
01-08-2023 22:37:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,054
closed
X-CLIP and other video classification models can't be loaded into CUDA GPU for inference without crashing the kernel/process
### System Info Originally: - `transformers` version: 4.25.1 (also tried 4.26.0-dev directly from the GitHub main branch) - Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Then, given [this comment](https://github.com/microsoft/VideoX/issues/57#issuecomment-1283627674) in the X-CLIP issues, I also tried: - `transformers` version: 4.25.1 - Platform: Linux-6.0.12-76060006-generic-x86_64-with-glibc2.35 - Python version: 3.8.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.8.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help? @NielsRogge tagging you since you've added the code for X-CLIP to the library and also commented in the X-CLIP issue I've mentioned above. ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. I have first copied [the example code](https://huggingface.co/docs/transformers/main/en/model_doc/xclip#transformers.XCLIPModel.forward.example) provided in the library documentation, which worked. 2. Then I've extended my notebook to process data from my (currently) private dataset, but still following exactly the example code. This is where I've noticed that the inference took a few seconds, so... 3. I have compiled [decord](https://github.com/dmlc/decord) from source, which allowed me to run the data processing on the GPU. This worked, but it didn't provide any performance improvement, so I reverted to the PyPI version. 4. I tried manually moving the model to the GPU, with `model.to("cuda")`, `model.to("cuda:0")`, `model.to(torch.device("cuda"))` and `model.cuda()`. All of these make the Jupyter Lab kernel crash with no error in the logs. If reloaded, the model still works, but only runs on CPU. 5. I also tried replacing `XClipModel` with other video classification models, such as [`TimesformerForVideoClassification`](https://huggingface.co/docs/transformers/main/en/model_doc/timesformer#transformers.TimesformerForVideoClassification). Since this model is not included in the stable release yet, I uninstalled transformers v4.25.1 and installed the current main branch (v4.26.0-dev). This still only ran on CPU and refused to work on GPU. 6. I have then found [this comment](https://github.com/microsoft/VideoX/issues/57#issuecomment-1283627674) about my exact problem in the microsoft/VideoX issues, saying they solved it by downgrading to PyTorch 1.8.0, which I did (from 1.13.0) after also downgrading Python (from 3.10 to 3.8 due to PyTorch compatibility). With this change, instantiating the model made the kernel crash immediately. My guess is that between PyTorch 1.8.0 and 1.13.0 a fallback to the CPU if the model couldn't be loaded into GPU was introduced. 
Other details: - Linux distro: Pop!_OS 22.04 - CPU: Ryzen 5 5600X - GPU: NVIDIA RTX 3090 - RAM: 16GB (even though limited, the model which I'm trying to load (microsoft/xclip-base-patch16-zero-shot) should fit with no problem) - NVIDIA driver 525.60.11 - CUDA 11.2 (installed with the `system76-cuda-latest` metapackage) -- even though `nvidia-smi` reports CUDA 12.0, could this be an issue? ### Expected behavior The model should be loaded into the GPU automatically, like other models that currently work flawlessly for me such as BART. At least, manually moving the model to the GPU should work without segfaulting.
01-08-2023 21:50:37
01-08-2023 21:50:37
After trying to revert the driver back from 525 to 515 and installing CUDA with other methods, such as by specifying `cuda_toolkit=11.7` in the `conda` installation (I originally just used `venv`), I've found out that my original configuration (under "Other details" above) worked if I did not import decord. The issue is given by simply importing decord (`import decord` is sufficient) before trying to move the model to the GPU. Unfortunately, since decord is written in C at its core, the Python process simply segfaults without error. I'm now using pyAV and everything works as expected. I'm closing this issue and opening another one to ask for updated docs without decord, this has cost me a lot of debugging time for something simple but undocumented, so I hope to save that time to other users.<|||||>Hi @e-caste, Thanks a lot for investigating. I'm not able to reproduce this in [Google Colab](https://colab.research.google.com/drive/1SMc0zW_zfp8j-iiasUh3CeJMY2i1NlPu?usp=sharing), which at the moment has PyTorch 1.13. I'm using the main branch of Transformers. The model seems correctly placed on the GPU (which I confirmed by running `nvidia-smi` and seeing whether the memory is occupied). I'll ping @nateraw here as he has been looking into several video decoding libraries, we should of course take one that works as intended. From [this thread](https://github.com/huggingface/datasets/issues/5225), we're currently in favor of using PyAV.<|||||>@NielsRogge I'm not sure if literally the `import decord` line is enough for the kernel to crash (I've tested a lot of things and I can't remember), but I'm sure that the import line in the docs (`from decord import VideoReader, cpu` -- I had it before `import torch` and `from transformers import XCLIPProcessor, XCLIPModel`) made it crash.
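For reference, a rough sketch of an X-CLIP inference path that avoids importing decord by reading frames with PyAV; the checkpoint, the 8-frame sampling, and the label prompts are illustrative assumptions, and the frame count should match what the chosen checkpoint was trained with:

```python
import av
import numpy as np
import torch
from transformers import XCLIPProcessor, XCLIPModel

def read_video_pyav(path, num_frames=8):
    # Decode every frame with PyAV, then sample `num_frames` evenly spaced frames.
    container = av.open(path)
    frames = [frame.to_ndarray(format="rgb24") for frame in container.decode(video=0)]
    indices = np.linspace(0, len(frames) - 1, num_frames).astype(int)
    return [frames[i] for i in indices]

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = XCLIPProcessor.from_pretrained("microsoft/xclip-base-patch32")
model = XCLIPModel.from_pretrained("microsoft/xclip-base-patch32").to(device)

video = read_video_pyav("video.mp4")
inputs = processor(text=["playing sports", "cooking"], videos=video, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    probs = model(**inputs).logits_per_video.softmax(dim=1)
print(probs)
```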
transformers
21,053
closed
Token embedding resizing does not work for TFGPT2Model
### System Info - `transformers` version: 4.25.1 - Platform: Linux-5.15.0-57-generic-x86_64-with-glibc2.35 - Python version: 3.9.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @gante and @Rocketknight1 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction After `add_special_tokens` to tokenizer and `resize_token_embeddings` on `TFGPT2Model`, evaluating the model results in an error that indicates that the embeddings are not resized as expected. Please see the example code and the execution output below: ``` from transformers import GPT2Tokenizer, TFGPT2Model SPECIAL_TOKENS_MAPPING = { 'bos_token': '<bos>', 'eos_token': '<eos>', 'pad_token': '<pad>', 'additional_special_tokens': ['<speaker1>', '<speaker2>'] } tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = TFGPT2Model.from_pretrained("gpt2") print("Evaluating TFGPT2Model BEFORE extending the tokenizer and model with additional tokens ...") inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") print(f"inputs = \n{inputs}\n") outputs = model(inputs) print(f"DONE!") print("Adding tokens...") orig_num_tokens = len(tokenizer.get_vocab()) num_special_tokens = tokenizer.add_special_tokens(SPECIAL_TOKENS_MAPPING) print(f"orig_num_tokens = {orig_num_tokens}, num_special_tokens={num_special_tokens}") model.resize_token_embeddings(new_num_tokens=orig_num_tokens + num_special_tokens) print("Evaluating TFGPT2Model AFTER extending the tokenizer and model with additional tokens ...") inputs = tokenizer("<speaker1>Hello, my dog is cute<speaker2>I agree!", return_tensors="tf") print(f"inputs = \n{inputs}\n") outputs = model(inputs) print(f"DONE!") ``` ``` Evaluating TFGPT2Model BEFORE extending the tokenizer and model with additional tokens ... inputs = {'input_ids': <tf.Tensor: shape=(1, 6), dtype=int32, numpy=array([[15496, 11, 616, 3290, 318, 13779]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 6), dtype=int32, numpy=array([[1, 1, 1, 1, 1, 1]], dtype=int32)>} DONE! Adding tokens... orig_num_tokens = 50257, num_special_tokens=5 Evaluating TFGPT2Model AFTER extending the tokenizer and model with additional tokens ... 
inputs = {'input_ids': <tf.Tensor: shape=(1, 11), dtype=int32, numpy= array([[50260, 15496, 11, 616, 3290, 318, 13779, 50261, 40, 4236, 0]], dtype=int32)>, 'attention_mask': <tf.Tensor: shape=(1, 11), dtype=int32, numpy=array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=int32)>} Traceback (most recent call last): File "/home/freddy/workspace/Nuhame/mlpug/examples/chatbot/tensorflow/test_tf_resize_token_size.py", line 33, in <module> outputs = model(inputs) File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 432, in run_call_with_unpacked_inputs return func(self, **unpacked_inputs) File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py", line 773, in call outputs = self.transformer( File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/modeling_tf_utils.py", line 432, in run_call_with_unpacked_inputs return func(self, **unpacked_inputs) File "/home/freddy/.virtualenvs/mlpug-tf/lib/python3.9/site-packages/transformers/models/gpt2/modeling_tf_gpt2.py", line 447, in call tf.debugging.assert_less( tensorflow.python.framework.errors_impl.InvalidArgumentError: Exception encountered when calling layer 'transformer' (type TFGPT2MainLayer). input_ids must be smaller than the embedding layer's input dimension (got 50261 >= 50257) Condition x < y did not hold. First 3 elements of x: [50260 15496 11] First 1 elements of y: [50257] Call arguments received by layer 'transformer' (type TFGPT2MainLayer): β€’ input_ids=tf.Tensor(shape=(1, 11), dtype=int32) β€’ past_key_values=None β€’ attention_mask=tf.Tensor(shape=(1, 11), dtype=int32) β€’ token_type_ids=None β€’ position_ids=None β€’ head_mask=None β€’ inputs_embeds=None β€’ encoder_hidden_states=None β€’ encoder_attention_mask=None β€’ use_cache=True β€’ output_attentions=False β€’ output_hidden_states=False β€’ return_dict=True β€’ training=False ``` ### Expected behavior The model should have 50257 + 5 = 50262 embeddings after resizing and thus an input ID with value 50261 should not result in any errors. The above code should run without errors.
01-08-2023 21:33:25
01-08-2023 21:33:25
@visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment). @susnato opened a fix for GPT2, but other models will also need a fix as well<|||||>(@susnato -- I've assigned this issue to me so it doesn't get forgotten, but I'm counting on your aid πŸ˜‰ )<|||||>Hi @gante I have been hit by the same issue! Namely, after having added new tokens to the tokenizer (GPT2Tokenizer), and resized the token_embeddings of the model (TFGPT2LMHeadModel), the model.fit(...) throw errors the same as @visionscaper reported. When could you release a fix patch? Or is there a workaround solution for now? You guys are doing a great job! And your support is highly appreciated! Cheers~<|||||>Hello @gante ! Thanks for your support. I also has faced the same issue as is commented by @visionscaper and @tqye2000. Especially, I tried to check almost every TFGPT2 based pretrained models released by huggingface and figured it out that resize_token_embeddings() does not work for all of them, even including the example code written in huggingface document. Hope this error gets fixed as soon as possible ! :) EDIT) After reading the comment below: > @visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment). > > @susnato opened a fix for GPT2, but other models will also need a fix as well I installed the source version of transformers library, which the most latestes on-going code handled by huggingface.co, rather than installing a stable distribution version. Then, resize_token_emeddings() successfully worked with TFGPT2 module ! Thanks to @gante @susnato for fixing crucial errors to Tensorflow users. :) <|||||>@tqye2000 @CHLEE-Leo Hey πŸ‘‹ Yes, the current source version has the issue fixed for TFGPT2. A new release of `transformers` should happen late next week, which will include this fix. The issue is present in other models, but hopefully will be sorted out soon as well. FYI, this issue appeared because we noticed a dangerous pattern in our embedding layers -- in TF, we can request to embed integers outside the bounds of the embedding layer and the code won't crash (returns a vector of zeros), which is extremely dangerous. I've added an out-of-bounds check, but forgot to account for the case with resized vocabulary πŸ™ƒ <|||||>Fixed on all models, thanks to @susnato 🧑 <|||||>Thanks @gante!<|||||>> Hello @gante ! Thanks for your support. I also has faced the same issue as is commented by @visionscaper and @tqye2000. Especially, I tried to check almost every TFGPT2 based pretrained models released by huggingface and figured it out that resize_token_embeddings() does not work for all of them, even including the example code written in huggingface document. Hope this error gets fixed as soon as possible ! :) > > EDIT) After reading the comment below: > > > @visionscaper thank you for raising the issue! It is a generalized problem with this check, which should only rely on the config's vocab size (which is the only reliable source of the actual vocabulary size at any given moment). 
> > @susnato opened a fix for GPT2, but other models will also need a fix as well > > I installed the source version of transformers library, which the most latestes on-going code handled by huggingface.co, rather than installing a stable distribution version. Then, resize_token_emeddings() successfully worked with TFGPT2 module ! Thanks to @gante @susnato for fixing crucial errors to Tensorflow users. :) Hi Could you please show me where or how could I get the latest source version of transformers? Can I get it with pip upgrade? Many thanks!<|||||>Hey @tqye2000 πŸ‘‹ You can upgrade your `transformers` installation to match the current source version with `pip install --upgrade git+https://github.com/huggingface/transformers.git`<|||||>Thank you very much, @gante! After having upgraded to the current source version, the resize_token_emeddings() seems to be working now. However I get "Allocation of 740033280 exceeds 10% of free system memory" messages. I guess this is my PC's issue. <|||||>Hi @gante May I ask another question. For fine tuning the gpt-2 model, should I pass the labels exactly the same as the inputs or should I shift the inputs by one token to create the labels? I get mixed information on the internet, some said the labels should be a copy of inputs, some examples showed the labels should be one-token shifted of the inputs. I apologise if here is not the right place for asking such questions! Many thanks! <|||||>Hey @tqye2000 -- using the best possible reference, [the code itself](https://github.com/huggingface/transformers/blob/31336dcf3f93dee19cd13c981f16982d612040d2/src/transformers/models/gpt2/modeling_gpt2.py#L1068), you can see that you *don't* need to shift the inputs. In other words, labels = inputs, all shifting happens inside the model. I hope this helps πŸ€— <|||||>Hi @gante Thank you very much for replying! Indeed I eventually dived into the code to see what's going on there and found: ` if labels is not None: # Shift so that tokens < n predict n shift_logits = lm_logits[..., :-1, :].contiguous() shift_labels = labels[..., 1:].contiguous() ` But nevertheless it is good to have your confirmation! <|||||>Hi @gante I think I still need to shift the labels by 1 token by myself. I guess this may be to do with the way I am passing the dataset to the transformer model. `dataset= tf.data.Dataset.from_tensor_slices((inputs, labels)) dataset = dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE) hist = model.fit(dataset, epochs=4) ` I just tested. If I didn't shift the labels myself, the fine tuning failed. Perhaps only if the labels is passed explicitly "labels=labels" to the model, then no need to shift beforehand. <|||||>@tqye2000 that should not be needed -- with HF models, if the label is not provided, [we try to infer it](https://github.com/huggingface/transformers/blob/73a2ff69740123ef85343580cbfa9ee8ce5e6fd5/src/transformers/modeling_tf_utils.py#L1521) (which is the case for GPT2, where labels = inputs). I'd recommend seeing our example to fine-tune models like GPT2: https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py (and, if it still fails, to open a new issue with a snippet where we can reproduce the problem :) )
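Pulling the thread together, a small sketch of the pattern discussed above (assuming a transformers version that already contains the fix, and using `TFGPT2LMHeadModel` so a loss is computed): when `labels` are included in the input dict, the model shifts them internally, so they can simply be a copy of the inputs.

```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

tokenizer.add_special_tokens({"additional_special_tokens": ["<speaker1>", "<speaker2>"]})
model.resize_token_embeddings(len(tokenizer))  # keep the embedding matrix in sync with the tokenizer

enc = tokenizer(["<speaker1>Hello, my dog is cute<speaker2>I agree!"], return_tensors="np")
features = {
    "input_ids": enc["input_ids"],
    "attention_mask": enc["attention_mask"],
    "labels": enc["input_ids"],  # no manual shifting: the model shifts the labels internally
}

dataset = tf.data.Dataset.from_tensor_slices(features).batch(1)
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))  # no loss passed: the model's internal LM loss is used
model.fit(dataset, epochs=1)
```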
transformers
21,052
closed
Fine-tune GIT on custom dataset [Expected input batch_size to match target batch_size]
deleted
01-08-2023 17:43:02
01-08-2023 17:43:02
@vasyza I'm doing research in swimming area and have the same issue. How to fix that?<|||||>@sgugger and other developers please help<|||||>I am not too sure how you want us to help without providing a reproducible example of the error you get.
transformers
21,051
closed
Add support for csv dataset files
null
01-08-2023 16:08:03
01-08-2023 16:08:03
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your PR, but csv datasets cannot work with the expected data format (nested dictionaries with languages).
transformers
21,050
closed
Patch-past-refactor
# What does this PR do? Should fix the test that broke `main` cc @sgugger
01-08-2023 11:12:16
01-08-2023 11:12:16
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,049
closed
Fix warning for MCTC model
# What does this PR do? In #20861, the warning introduced did not use the right direction for the test. This PR fixes that. Fixes #21031
01-08-2023 09:44:46
01-08-2023 09:44:46
Merging to fix the warning @ydshieh but can address any comment in a later PR :-) <|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21049). All of your documentation changes will be reflected on that endpoint.
transformers
21,048
closed
fix typo
# What does this PR do? Typo fix: Corrected the word metada --> metadata ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
01-08-2023 08:18:48
01-08-2023 08:18:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,047
closed
Remove Roberta Dependencies from XLM Roberta Flax and Tensorflow models
# What does this PR do? This removes Roberta dependencies from XLM Roberta Flax and Tensorflow Models. I'm a bit confused about whether the `name` parameter to `TFXLMRobertaMainLayer` should be `xml-roberta` or `xml_roberta` - I've gome with `xml-roberta` for now. i.e. `self.XLMRoberta = TFXLMRobertaMainLayer(config, name="xlm-roberta")` Fixes #19303 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
01-08-2023 00:48:48
01-08-2023 00:48:48
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks! There are a couple of places where the copy does not match the original. You can test locally with `make repo-consistency` to get the failing tests. Let me know if you need help!<|||||>HI @sgugger, thanks for the advice! I've tried `make repo-consistency` but it looks like the version of jax that was installed during `pip -e ".[dev]"` causes `RuntimeError: jaxlib is version 0.1.75, but this version of jax requires version >= 0.3.0.`<|||||>Ah, it's a problem with my original dev install. I'll reinstall and see how it goes.<|||||>I can't figure out this last copy error, could you help me out? Thanks.<|||||>I won't be able to dive more into it until next week. Running `make fix-copies` and looking at the diff will give you a clue of what the copies util wants to change.<|||||>Got it, thanks for the tip<|||||>@sgugger I'm stuck on this error message as a part of `make repo-consistency`: ``` python utils/check_inits.py Traceback (most recent call last): File "/home/ziggy/dev/transformers/utils/check_inits.py", line 299, in <module> check_all_inits() File "/home/ziggy/dev/transformers/utils/check_inits.py", line 238, in check_all_inits raise ValueError("\n\n".join(failures)) ValueError: Problem in src/transformers/__init__.py, both halves do not define the same objects. Differences for tf backend: TFXLMRobertaForCausalLM in _import_structure but not in TYPE_HINT. TFXLMRobertaPreTrainedModel in _import_structure but not in TYPE_HINT. Differences for flax backend: FlaxXLMRobertaForCausalLM in _import_structure but not in TYPE_HINT. FlaxXLMRobertaPreTrainedModel in _import_structure but not in TYPE_HINT. Problem in src/transformers/models/xlm_roberta/__init__.py, both halves do not define the same objects. Differences for flax backend: FLAX_XLM_ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST in TYPE_HINT but not in _import_structure. make: *** [Makefile:41: repo-consistency] Error 1 ``` I don't know where the variable `TYPE_HINT` is, it doesn't seem to be anywhere in the entire repo apart from this error message.<|||||>Ah never mind, I found them. Thanks!<|||||>I'm confused by the error that's showing now - the doc builder can't find `TFXLMRobertaForCausalLM.forward` , which I don't think exists because it's tensorflow...<|||||>Finally, no errors πŸ˜† <|||||>Thanks so much for all the help!
transformers
21,046
closed
Omatkasvot
### Model description Training one's own face images ### Open source status - [ ] The model implementation is available - [ ] The model weights are available ### Provide useful links for the implementation _No response_
01-07-2023 20:00:54
01-07-2023 20:00:54
transformers
21,045
closed
VisualBertTokenizer
### Feature request VisualBert takes 2 main inputs: tokenized text and tokenized images. The text tokenization can already be handled by the BertTokenizer, but the visual tokenization still has no support, and it is no trivial task. These visual tokens are built from embeddings derived from a set of regions, each one corresponding to a detected object in the image produced by an object detector. Here's a more detailed description of those embeddings from the [paper](https://arxiv.org/pdf/1908.03557.pdf): Each embedding in F is computed by summing three embeddings: f_o, a visual feature representation of the bounding region of f, computed by a convolutional neural network. f_s, a segment embedding indicating that it is an image embedding as opposed to a text embedding. f_p, a position embedding, which is used when alignments between words and bounding regions are provided as part of the input, and set to the sum of the position embeddings corresponding to the aligned words. As a tip, remember that some VisualBert checkpoints handle different visual embedding dimensions. You can use the [examples from the model docs](https://github.com/huggingface/transformers/blob/main/docs/source/en/model_doc/visual_bert.mdx) as a guide. Also note that, given that the embedding depends on an object detector, this should be an explicit parameter of the visual tokenizer, since different detectors will perform differently. ### Motivation Building a visual embedding is conceptually simple, but implementing it is a tedious task, and there is no standard way to handle this directly with Transformers. ### Your contribution This issue arose while building the `DummyVisualBertInputGenerator` as a prerequisite for exporting the model to ONNX in Optimum. This is still in progress.
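To make the request more concrete, here is a rough, hypothetical sketch of the sum described above; the module name, the detector interface (pre-extracted region features), and the 2048-dimensional feature size (typical of a Faster R-CNN backbone) are assumptions for illustration, not an existing transformers API:

```python
import torch
from torch import nn

class VisualEmbedder(nn.Module):
    """Builds VisualBERT-style visual tokens from detector region features (hypothetical helper)."""

    def __init__(self, feature_dim=2048, visual_embedding_dim=2048, num_segments=2, max_positions=512):
        super().__init__()
        self.visual_proj = nn.Linear(feature_dim, visual_embedding_dim)          # f_o: region feature representation
        self.segment_embed = nn.Embedding(num_segments, visual_embedding_dim)    # f_s: marks the token as visual
        self.position_embed = nn.Embedding(max_positions, visual_embedding_dim)  # f_p: aligned-word positions

    def forward(self, region_features, aligned_positions=None):
        # region_features: (batch, num_regions, feature_dim) produced by an object detector
        batch, num_regions = region_features.shape[:2]
        f_o = self.visual_proj(region_features)
        f_s = self.segment_embed(torch.ones(batch, num_regions, dtype=torch.long, device=region_features.device))
        if aligned_positions is None:
            aligned_positions = torch.zeros(batch, num_regions, dtype=torch.long, device=region_features.device)
        f_p = self.position_embed(aligned_positions)
        return f_o + f_s + f_p

# 36 detected regions per image is a common detector setting; visual_embedding_dim must
# match the value expected by the chosen VisualBert checkpoint.
visual_embeds = VisualEmbedder()(torch.randn(1, 36, 2048))
print(visual_embeds.shape)  # torch.Size([1, 36, 2048])
```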
01-07-2023 16:46:01
01-07-2023 16:46:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,044
closed
Add `min_new_tokens` argument in generate() (implementation based on `MinNewTokensLengthLogitsProcessor`)
# What does this PR do? Fixes #20756 #20814 #20614 (cc @gonced8 @kotikkonstantin) As many said, it is better to add an argument `min_new_tokens` to the `.generate()` method to limit the length of newly generated tokens. The current parameter `min_length` limits the length of `prompt + newly generated tokens`, not the length of `newly generated tokens`. I closed my old PR #20819 and implement this feature based on `MinNewTokensLengthLogitsProcessor` (see #20892) as suggested by @gante . <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
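For illustration, a short usage sketch assuming a build that includes this PR; `min_new_tokens` constrains only the freshly generated tokens, whereas `min_length` also counts the prompt:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    min_new_tokens=20,  # at least 20 tokens beyond the prompt
    max_new_tokens=40,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```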
01-07-2023 09:52:38
01-07-2023 09:52:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>(@sgugger ready to merge if you agree. For context: this PR makes the `MinNewTokensLengthLogitsProcessor` usable from `.generate`, if the user passes `min_new_tokens` in the generate config or as an argument)
transformers
21,043
closed
ConvNeXT V2
### Model description Short description: just released - ConvNeXt with a new internal layer. In this paper, the authors propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. # Contribution ## I would like to work on this! ### Open source status - [X] The model implementation is available - [X] The model weights are available ### Provide useful links for the implementation Paper: https://arxiv.org/abs/2301.00808 Official implementation: https://github.com/facebookresearch/ConvNeXt-V2 @NielsRogge @alaradirik
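For orientation, a rough PyTorch sketch of the GRN layer as described in the paper (channels-last layout and epsilon value assumed); the official repository above remains the reference implementation:

```python
import torch
from torch import nn

class GRN(nn.Module):
    """Global Response Normalization over the spatial dimensions, inputs in (N, H, W, C) layout."""

    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)     # global feature aggregation per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)  # divisive normalization across channels
        return self.gamma * (x * nx) + self.beta + x          # feature calibration plus residual connection

x = torch.randn(2, 14, 14, 96)
print(GRN(96)(x).shape)  # torch.Size([2, 14, 14, 96])
```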
01-07-2023 06:31:29
01-07-2023 06:31:29
Hi @IMvision12! Thanks for taking this on, I think ConvNeXT V2 would be a great addition to transformers. If you have any questions about the internal logic of the library or run into issues, you can ping me or @NielsRogge anytime. We can also create a Slack channel and continue the collaboration on the PR over there if you'd like. <|||||>@IMvision12 I sent the invite, looking forward to adding ConvNeXT V2 to transformers!<|||||>Hello, I'd like to work on this issue. How do I get started?<|||||>Hi @asrimanth, I took over the PR and I'm almost done with it. Feel free to look at other open issues though!
transformers
21,042
closed
fix typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> typo fix (dictionnary -> dictionary) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-07-2023 06:04:54
01-07-2023 06:04:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,041
closed
Add Tri-Stage Scheduler, proposed in SpecAugment
### Feature request paper: [SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition](https://arxiv.org/abs/1904.08779) code: [Fairseq Tri-Stage Scheudler](https://github.com/facebookresearch/fairseq/blob/main/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py) i want to add Tri-Stage Scheduler in huggingface ### Motivation i have two motivation - first, many ASR model using tri-stage scheduler on training, typically wav2vec2 in this case - second, when i making the model, use tri-stage scheduler, so i thought it'd be better to post it while I'm making it. ### Your contribution maybe it's need to modify optimization.py, trainer.py, trainingargument.py codes ```python import math from typing import Tuple, Union from torch.optim import Optimizer from torch.optim.lr_scheduler import LambdaLR # [NOTE]: copied from https://github.com/facebookresearch/fairseq/blob/main/fairseq/optim/lr_scheduler/tri_stage_lr_scheduler.py def get_tri_stage_scheduler_with_warmup( optimizer: Optimizer, num_training_steps: int, final_lr: float, num_warmup_steps: Union[int, float], num_hold_steps: Union[int, float], num_decay_steps: Union[int, float], last_epoch: int = -1, ) -> LambdaLR: default_lr = optimizer.defaults["lr"] check_warmup_type = isinstance(num_warmup_steps, int) warmup_steps = num_warmup_steps if check_warmup_type else num_warmup_steps * num_training_steps check_hold_type = isinstance(num_hold_steps, int) hold_steps = num_hold_steps if check_hold_type else num_hold_steps * num_training_steps check_decay_type = isinstance(num_decay_steps, int) decay_steps = num_decay_steps if check_decay_type else num_decay_steps * num_training_steps if not (warmup_steps + hold_steps + decay_steps) <= num_training_steps: raise ValueError("must don't exceed max_steps. but lr steps exceed max_step, please setting again") warmup_factpr = default_lr / warmup_steps decay_factor = -math.log(final_lr) / decay_steps def _decide_stage(step: int) -> Tuple[int, int]: # [NOTE]: warmup(rampup) stage if step < warmup_steps: return ("warm", step) offset = warmup_steps # [NOTE]: hold stage if step < offset + hold_steps: return ("hold", step - offset) offset += hold_steps # [NOTE]: decay stage if step <= offset + decay_steps: return ("decay", step - offset) # [NOTE]: over stage return "over", step - offset def lr_lambda(current_step: int) -> float: stage, step = _decide_stage(current_step) if "warm" == stage: compensator = (current_step if current_step else 1) * default_lr learning_rate = (warmup_factpr * step) + compensator elif "hold" == stage: compensator = default_lr learning_rate = default_lr**compensator elif "decay" == stage: compensator = default_lr learning_rate = (default_lr**compensator) * math.exp(-decay_factor * step) elif "over" == stage: learning_rate = final_lr return learning_rate return LambdaLR(optimizer, lr_lambda, last_epoch) ``` ## testing & result I compared the fairseq tri-stage on the link with the tri-stage I made using matplotlib. And I compared it with linear scheduler's warmup. 
### maked tri-stage ![maked_tri_stage](https://user-images.githubusercontent.com/93233241/211130497-27a34222-ec4c-4f1e-9bd3-65067afb6493.png) ### fairseq tri-stage ![fairseq_tri_stage](https://user-images.githubusercontent.com/93233241/211130502-978d43d7-35d5-420e-9223-e08d0d84dfd0.png) ### linear tri stage ![linear](https://user-images.githubusercontent.com/93233241/211130508-0521697b-dd2d-4570-ae8b-c958a34d54d3.png) ### all gather ![all_gather](https://user-images.githubusercontent.com/93233241/211130509-80c5107a-2827-4471-8caa-3331cd5a2e5b.png) it's worked well! If you zoom in on 0.00025 hold steps, you can also see green! You don't have to read it after this. ## why use compensator? actually this tri-stage have float mismatch, if you compare output lr of this tri-stage and fairseq tri-stage, values is different when you test. it's because LamdbaLR, LamdbaLR's a part that multiplies the value from the scheduler lr by default_lr. as a result output lr differentt fairseq tri-stage. so i used compensator for solved this issue Since it's mathematically corrected, there could be some errors. - average error of 9.361272723949663e-08 at warmup stage. - average error of -4.569185665208767e-08 at hold stage. - average error of -4.164950459092579e-08 at decay stage.
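A small usage sketch of `get_tri_stage_scheduler_with_warmup` as defined above, with illustrative hyper-parameters (fractions of a 10k-step run) rather than values from any particular recipe:

```python
import torch

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

scheduler = get_tri_stage_scheduler_with_warmup(
    optimizer,
    num_training_steps=10_000,
    final_lr=1e-6,
    num_warmup_steps=0.1,  # floats are treated as fractions of num_training_steps
    num_hold_steps=0.4,
    num_decay_steps=0.5,
)

for step in range(10_000):
    optimizer.step()
    scheduler.step()
```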
01-07-2023 05:33:32
01-07-2023 05:33:32
Note that we won't accept new optimizer/scheduler in the Transformers library as the main goal of Transformers is models :-) You can add the scheduler directly to an example however!<|||||>ok! thank you for your reply!
transformers
21,040
closed
pytorch-pretrained-bert and transformers give different results
### System Info ```shell - `transformers` version: 4.25.1 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction from pytorch_pretrained_bert import BertTokenizer, BertModel import torch torch.manual_seed(0) tokenizer = BertTokenizer.from_pretrained('bert-large-cased', do_lower_case=False) model = BertModel.from_pretrained('bert-large-cased') model.eval() model.to(device) text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = tokenizer.tokenize(text) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [1 for x in tokenized_text] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) tokens_tensor = tokens_tensor.to('cuda') segments_tensors = segments_tensors.to('cuda') with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) #%% import torch from transformers import BertTokenizer, BertModel, BertForMaskedLM torch.manual_seed(0) tokenizer = BertTokenizer.from_pretrained('bert-large-cased', do_lower_case=False) model = BertModel.from_pretrained('bert-large-cased', output_hidden_states=True) model.eval() model.to(device) text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]" tokenized_text = tokenizer.tokenize(text) indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text) segments_ids = [1 for x in tokenized_text] tokens_tensor = torch.tensor([indexed_tokens]) segments_tensors = torch.tensor([segments_ids]) tokens_tensor = tokens_tensor.to('cuda') segments_tensors = segments_tensors.to('cuda') with torch.no_grad(): outputs = model(tokens_tensor, segments_tensors) ### Expected behavior ```shell The outputs from the 24 encoding layers should be identical for transformer and pytorch_pretrained_bert ``` ### Checklist - [X] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
01-06-2023 20:35:27
01-06-2023 20:35:27
Hi, @alicialitrtwe In Huggingface the model is loaded from [here](https://huggingface.co/bert-large-cased) which as the description says has 336M paramaters This model has the following configuration:(taken from [here](https://huggingface.co/bert-large-cased)) * 24-layer * 1024 hidden dimension * 16 attention heads * 336M parameters. But in https://github.com/Meelfy/pytorch_pretrained_BERT, the model has 340M parameters as the description says [here](https://github.com/Meelfy/pytorch_pretrained_BERT#doc) * bert-large-cased: 24-layer, 1024-hidden, 16-heads, 340M parameters So, I believe you are getting different results depending on different implementations. Actually in the `bert-large-cased` model card in huggingface there is a disclaimer suggesting this same problem, it says, "Disclaimer: The team releasing BERT did not write a model card for this model so this model card has been written by the Hugging Face team." . You can read more about it [here](https://huggingface.co/bert-large-cased). I hope it solves your question, Thanks, susnato. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,039
closed
low_cpu_mem_usage raises KeyError with modified GPT2 model
### System Info ``` - `transformers` version: 4.25.1 - Platform: Linux-5.4.0-135-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Not yet - Using distributed or parallel set-up in script?: Not yet ``` ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to test GPT2 models with different layer numbers, head numbers, and head sizes. The following code works with no errors. And the model is loaded successfully into the CPU with random weights, which is expected. ``` import torch from transformers import AutoModelForCausalLM, AutoConfig if __name__ == "__main__": model_id = "gpt2" model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id) model_config.n_layer = 48 model_config.n_head = 25 model_config.n_embd = 1600 model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id, config=model_config, ignore_mismatched_sizes=True, torch_dtype=torch.float16) ``` However, when I set the flag `low_cpu_mem_usage=True` in `from_pretrained()` like this: ``` import torch from transformers import AutoModelForCausalLM, AutoConfig if __name__ == "__main__": model_id = "gpt2" model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id) model_config.n_layer = 48 model_config.n_head = 25 model_config.n_embd = 1600 model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id, config=model_config, ignore_mismatched_sizes=True, torch_dtype=torch.float16, low_cpu_mem_usage=True) ``` I get below errors: ``` /opt/conda/lib/python3.8/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.5) warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of " Traceback (most recent call last): File "tmp.py", line 11, in <module> model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id, File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 463, in from_pretrained return model_class.from_pretrained( File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2379, in from_pretrained ) = cls._load_pretrained_model( File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 2512, in _load_pretrained_model param = model_state_dict[key] KeyError: 'h.45.attn.c_proj.bias' ``` ### Expected behavior I expect my code to run with no errors doesn't matter if I set `low_cpu_mem_usage` to `True` or `False`.
01-06-2023 20:28:57
01-06-2023 20:28:57
Hi, @Wenhan-Tan I have made a PR regarding this issue, you can checkout the branch `fix_low_cpu_mem_usage` from my repository ([here](https://github.com/susnato/transformers/tree/fix_low_cpu_mem_usage)) and check if it solves your issue or not until the mods take any action on my PR or maybe merge it. Thanks, susnato.<|||||>Hi @susnato , Thank you! Your PR solves the issue! But I get another one when I use DeepSpeed inference afterwards. Not sure if they're related. Code is below: ``` import torch from transformers import AutoModelForCausalLM, AutoConfig import deepspeed if __name__ == "__main__": model_id = "gpt2" model_config = AutoConfig.from_pretrained(pretrained_model_name_or_path=model_id) model_config.n_layer = 48 model_config.n_head = 25 model_config.n_embd = 1600 model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_id, config=model_config, ignore_mismatched_sizes=True, torch_dtype=torch.float16, low_cpu_mem_usage=True) ds_config = { "tensor_parallel": {"tp_size": 1}, "dtype": "fp16", "replace_with_kernel_inject": True, "replace_method": "auto", } ds_model = deepspeed.init_inference(model=model, config=ds_config) ``` I get errors below: ``` Traceback (most recent call last): File "tmp.py", line 23, in <module> ds_model = deepspeed.init_inference(model=model, config=ds_config) File "/home/wenhant/.local/lib/python3.8/site-packages/deepspeed/__init__.py", line 311, in init_inference engine = InferenceEngine(model, config=ds_inference_config) File "/home/wenhant/.local/lib/python3.8/site-packages/deepspeed/inference/engine.py", line 127, in __init__ self.module.to(device) File "/home/wenhant/.local/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1682, in to return super().to(*args, **kwargs) File "/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 987, in to return self._apply(convert) File "/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 639, in _apply module._apply(fn) File "/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 639, in _apply module._apply(fn) File "/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 662, in _apply param_applied = fn(param) File "/home/wenhant/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 985, in convert return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking) NotImplementedError: Cannot copy out of meta tensor; no data! ``` This error won't occur if I don't use the flag `low_cpu_mem_usage=True`.
transformers
21,038
closed
Add: tensorflow example for image classification task guide
This PR addresses https://github.com/huggingface/transformers/issues/21037 It adds a Tensorflow example to the existing task guide on image classification. State of the PR: The example illustrates preprocessing in TF, training, and pushing to Hub. The code samples have been tested and they work/reproduce.
01-06-2023 20:05:58
01-06-2023 20:05:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you all for the reviews! I have added the final part - data augmentation (kudos to @sayakpaul for helping me troubleshoot the issues I was having). The example is now complete. Let me know if it looks good enough to be merged :) cc @amyeroberts @sgugger
transformers
21,037
closed
Add tensorflow example for the image classification task guide
For the same dataset and steps as in https://huggingface.co/docs/transformers/tasks/image_classification, add sample code for the TensorFlow part. This example can supplement the existing guide and can be helpful to those who choose TensorFlow over PyTorch and would like to use Transformers for image classification. Related PR: https://github.com/huggingface/transformers/pull/21038
01-06-2023 20:01:29
01-06-2023 20:01:29
transformers
21,036
closed
remove flax from `documentation_tests.txt`
# What does this PR do? #21009 added `src/transformers/generation/flax_utils.py` to `documentation_tests.txt`, but the CI image doesn't have `jax/flax` installed. The whole doctest suite failed, yet reported 0 failures. This PR removes this file from `documentation_tests.txt`. The CI image used is the same as the scheduled CI, and it's intentionally built without `jax/flax`. We can decide whether to use a separate image though.
01-06-2023 16:54:24
01-06-2023 16:54:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ah, this explains why the previous example had several issues πŸ˜… @ydshieh We don't plan to add FLAX to the doctests, correct?<|||||>@gante Not a no from me. I think it's good to make sure the examples work (this is in the range of maintenance mode πŸ˜„ ). But I would like to have a yes from @sgugger and @LysandreJik before working on it.<|||||>Not worth any work right now given the usage IMO. There is plenty to do with more priority :-)
transformers
21,035
closed
feature: update wandb callback to upload checkpoints
# What does this PR do? The PR updates the `WandbCallback` with the following changes: - Adds `on_save` method to upload model checkpoints as artifacts. - Changes the default value of environment variable `WANDB_WATCH` from `gradients` to `false`. This enables quicker training when defaults are used. The user can easily change this behavior by setting the env variable. - Changes the `WANDB_LOG_MODEL` variable from `bool` to `str` allowing for different settings to upload artifacts. - Modifies the class dostring to reflect the above changes. - Fixes broken link to wandb documentation - Changes the wandb `run_name` from `output_dir` to wandb auto generated name. this avoids duplication of run names in wandb workspace ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). ## Who can review? - trainer: @sgugger - documentation: @stevhliu ## Examples - Example [colab](https://colab.research.google.com/drive/17imujjBEL2cQL3odAJEbVvjR6zVsDRuH?usp=sharing) reflecting all the changes to the WandbCallback - Example Weights & Biases [workspace](https://wandb.ai/parambharat/hf_transformers?workspace=user-parambharat) with runs that show different settings. - Example Weights & Biases [Artifact](https://wandb.ai/parambharat/hf_transformers/artifacts/checkpoint/checkpoint-ajdcez6k/v4) created for checkpoints
01-06-2023 15:57:53
01-06-2023 15:57:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hey, @sgugger : Thanks for the quick review and suggestions. I've resolved all the issues. :hugs:<|||||>@parambharat There is a weird issue with the tests. Could you try refreshing your permissions as shown [here](https://support.circleci.com/hc/en-us/articles/360048210711-How-to-Refresh-User-Permissions-) and then pushing an empty commit?<|||||>Hey, @sgugger. I ran the fix that you suggested and all checks pass now.<|||||>Thank you @stevhliu. I've committed your recommendations.<|||||>Thanks again for your contribution!
transformers
21,034
closed
RAM Out-Of-Memory error with `run_mlm.py` when loading a 6Gb json dataset
### System Info - `transformers` version: 4.23.0.dev0 - Platform: Linux-4.18.0-305.65.1.el8_4.x86_64-x86_64-with-glibc2.28 - Python version: 3.9.13 - Huggingface_hub version: 0.9.1 - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes (DDP) ### Who can help? @sgugger because it might be an error with the Trainer ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When using the `run_mlm.py` script with almost no modifications using a Json dataset of ~6Gb, my job gets killed by SLURM. The stack trace looks like the following : ``` Traceback (most recent call last): File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 193, in <module> main() File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launch.py", line 174, in launch run(args) File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/run.py", line 752, in run elastic_launch( File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 131, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/gpfswork/rech/rax/commun/miniconda3/envs/artificial_data/lib/python3.9/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ======================================================= src/training/run_mlm.py FAILED ------------------------------------------------------- Failures: [1]: time : 2023-01-06_11:50:42 host : r10i0n8-ib0 rank : 1 (local_rank: 1) exitcode : -9 (pid: 870220) error_file: <N/A> traceback : Signal 9 (SIGKILL) received by PID 870220 ------------------------------------------------------- Root Cause (first observed failure): [0]: time : 2023-01-06_11:50:42 host : r10i0n8-ib0 rank : 0 (local_rank: 0) exitcode : -9 (pid: 870219) error_file: <N/A> traceback : Signal 9 (SIGKILL) received by PID 870219 ======================================================= slurmstepd: error: Detected 14 oom-kill event(s) in StepId=275597.batch. Some of your processes may have been killed by the cgroup out-of-memory handler. ``` What's weird is that the dataset is not that large (unfortunately I can't share it), and it worked fine with other datasets of similar size (for instance using ~4Gb of OSCAR text datasets). It also worked fine with another Json dataset of the same type but 50 times smaller. 
I am putting the issue here because the last messages logged are the following : ``` [INFO|trainer.py:502] 2023-01-06 11:50:25,673 >> max_steps is given, it will override any value given in num_train_epochs [INFO|trainer.py:556] 2023-01-06 11:50:25,673 >> Using cuda_amp half precision backend [INFO|trainer.py:725] 2023-01-06 11:50:25,674 >> The following columns in the training set don't have a corresponding argument in `XLMRobertaForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `XLMRobertaForMaskedLM.forward`, you can safely ignore this message. ``` which seems to indicate that it happens inside the Trainer method `_inner_training_loop`. For reference, here is the command I'm running : ```shell n_gpus=4 python -m torch.distributed.launch --nproc_per_node $n_gpus \ src/training/run_mlm.py \ --model_type xlm-roberta \ --config_overrides max_position_embeddings=512 \ --tokenizer_name tokenizers/br_tokenizer_30k \ --train_file ${TRAIN_FILE} \ --is_split_into_words \ --line_by_line \ --use_auth_token \ --validation_split_percentage 5 \ --max_eval_samples 5000 \ --max_seq_length 128 \ --eval_steps 500 \ --output_dir $OUTPUT_DIR \ --do_train \ --do_eval \ --load_best_model_at_end \ --metric_for_best_model "loss" \ --greater_is_better False \ --evaluation_strategy steps \ --per_device_train_batch_size $per_device_batch_size \ --per_device_eval_batch_size $per_device_batch_size \ --gradient_accumulation_steps $(( $per_device_total_batch_size / $per_device_batch_size )) \ --fp16 \ --learning_rate 2e-5 \ --weight_decay 1e-2 \ --max_steps 100_000 \ --warmup_steps 10_000 \ --logging_dir $TENSORBOARD_DIR \ --logging_steps 200 \ --save_strategy steps \ --save_steps 1000 \ --save_total_limit 2 \ --preprocessing_num_workers $(( 8 * $n_gpus )) \ --report_to tensorboard \ --seed 42 ``` Some of the modifications to `run_mlm.py` involve using pre-tokenized datasets instead of raw text datasets by using the option `--is_split_into_words`. Any idea why this happens or how to circumvent it ? ### Expected behavior I would expect the Trainer to start training without OOM failure, especially given the fact that `run_mlm.py` tokenizes and groups the sentences in the dataset without OOM issues.
01-06-2023 14:39:03
01-06-2023 14:39:03
Debugging seems to indicate that this OOM error happens when wrapping the model with DDP: https://github.com/huggingface/transformers/blob/48d4e147d824efab97637947709d5aa67c809b3d/src/transformers/trainer.py#L1446-L1451 However, I don't really see why the data size would impact this line...<|||||>I found out that this happens with 4 GPUs, but not with 2 or 1 GPUs, so my workaround at the moment is to train with only 2 GPUs which is slower but doable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,033
closed
BertTokenizer not release gpu memory after del
### System Info transformers 4.20.1 tensorflow 2.9.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python from transformers import BertTokenizer, TFBertModel import torch max_seq_len = 2028 tokenizer = BertTokenizer.from_pretrained('klue/bert-base', truncation=True, max_seq_len=max_seq_len) del tokenizer torch.cuda.empty_cache() ``` ### Expected behavior On Ubuntu 20.04 with an RTX 3090, GPU memory is not released after `del tokenizer`. How can I release the GPU memory?
01-06-2023 06:45:12
01-06-2023 06:45:12
This code doesn't use any GPU memory as tokenizers don't even import `torch`.<|||||>@sgugger In my case, after running this code, GPU memory is fully occupied. Is your PyTorch the correct GPU version?<|||||>This is because you import `TFBertModel` (I didn't catch it in your first code sample). This imports `tensorflow` which then takes all the GPU memory.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
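If importing TensorFlow is unavoidable, one common mitigation (a sketch, not something suggested in the original thread) is to enable memory growth before any GPU work, so TensorFlow allocates on demand instead of reserving the whole card:

```python
import tensorflow as tf

# Must run before the GPUs are initialized (i.e. before building/loading any TF model)
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

from transformers import TFBertModel  # safe to import/build TF models afterwards
```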
transformers
21,032
closed
fix parameter name in docstring
# What does this PR do? fixes parameter name `return_tensor -> return_tensors` in docstring. Fixes potential confusion. I believe this is my biggest contribution yet 🀣 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
01-06-2023 03:00:08
01-06-2023 03:00:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,031
closed
[make repo-consistency] weird warning
This doesn't make sense: ``` You are using torch==1.13.0, but torch>=1.9.0 is required to use MCTCTModel. Please upgrade torch. ``` since: `1.13 > 1.9` - a wrong comparison function? full output: ``` $ make repo-consistency python utils/check_copies.py python utils/check_table.py python utils/check_dummies.py python utils/check_repo.py Checking all models are included. Checking all models are public. You are using torch==1.13.0, but torch>=1.9.0 is required to use MCTCTModel. Please upgrade torch. Checking all models are properly tested. Checking all objects are properly documented. [...] ``` @sgugger
01-05-2023 23:39:30
01-05-2023 23:39:30
super, thank you for fixing, Sylvain!
transformers
21,030
closed
[bnb optim] fixing test
**Note to reviewers: we are dealing with a slow test, so a green CI doesn't tell you anything.** This work is a continuation of https://github.com/huggingface/transformers/pull/21019 where this test failure was first reported by @ydshieh. This PR: - extends/improves the `run_trainer` wrapper, which simplifies the bnb test - drops the percentage-based asserts as those are quite meaningless - since they don't measure the memory used by the optimizer but the whole memory - replaces them with the actual calculated saved-memory expectation, since we know exactly what the saved memory should be for a particular model: it's `6*params` bytes, but not for `nn.Embedding`, which gets fp32 - so let's measure that carefully. https://github.com/huggingface/transformers/blob/35a7052b61579cfe8df1a059d4cd3359310ec2d1/src/transformers/trainer.py#L1042-L1050 - drops the peak gpu memory comparison since on its own it's totally meaningless; in my testing both optims produce the same peak memory - what we care about is the total gpu memory. - forces 1 gpu - so that the gpu memory usage is the same in all environments; to support 2+ gpus we would need a different threshold for each - except this is totally unnecessary - switches to MBs everywhere, so it's much easier to debug Now, the test should be very deterministic on any gpu/platform. I still gave a small margin for differences. You can read my notes in the test for the exact math. @ydshieh, please verify it works on the CI and we will then merge it. Thank you!
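As a rough illustration of the arithmetic described above (a sketch under the stated assumption that 8-bit optimizer states save ~6 bytes per parameter, except for `nn.Embedding`, which stays in fp32 — not the test's actual code):

```python
import torch.nn as nn

def expected_bnb_saving_mb(model: nn.Module) -> float:
    saved_bytes = 0
    for module in model.modules():
        if isinstance(module, nn.Embedding):
            continue  # embedding optimizer states are kept in fp32, so no saving here
        for param in module.parameters(recurse=False):
            saved_bytes += 6 * param.numel()  # 2x fp32 Adam states (8 bytes) -> 8-bit (2 bytes)
    return saved_bytes / 2**20
```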
01-05-2023 23:01:44
01-05-2023 23:01:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>The new test now passes on CI runners! I will review the change and thank you @stas00 ❀️ !
transformers
21,029
closed
Fix CLIP pooling for textual inversion so that eos tokens are taken
### Feature request For textual inversion in diffusers, we are adding tokens that have a higher token id than the eos token. So when we get CLIP embeddings for textual inversion tokens, we need to change the pooling so that it takes the eos token and not the argmax token. ### Motivation This is an issue that should be fixed, as the CLIP embeddings won't work once we add more tokens to the tokenizer. ### Your contribution I can make a PR for this. This is not an issue in the original implementation of CLIP since they reuse pre-existing tokens in the embedding, which has its own pros and cons.
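A minimal sketch of the pooling change being proposed, on toy tensors: the first variant mirrors the argmax-over-ids pooling described above, the second looks up the eos token explicitly. The token ids are assumptions for illustration, not the actual CLIP code.

```python
import torch

eos_token_id = 49407  # CLIP's usual <|endoftext|> id
last_hidden_state = torch.randn(2, 5, 8)
input_ids = torch.tensor([[49406, 320, 1125, 49407, 49407],
                          [49406, 320, 49500, 49407, 49407]])  # row 2 contains an added token id > eos

# current behaviour: argmax over ids picks the *added* token (49500) in row 2, not </eos>
pooled_argmax = last_hidden_state[torch.arange(2), input_ids.argmax(dim=-1)]

# proposed behaviour: take the first position whose id equals eos_token_id
eos_pos = (input_ids == eos_token_id).int().argmax(dim=-1)
pooled_eos = last_hidden_state[torch.arange(2), eos_pos]
```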
01-05-2023 22:21:57
01-05-2023 22:21:57
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,028
closed
Refactor script to reduce complexity
# What does this PR do? This PR refactors some functions of the script `run_bart_dlm_flax.py`. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-05-2023 22:16:43
01-05-2023 22:16:43
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21028). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for your PR. We prefer the current style for examples as in general users have indicated they prefer to: - not have to look for intermediate functions but just read the code sequentially - prefer if return xxx else return yyy statements to the suggested changes in this PR.<|||||>Thanks for the feedback, I will keep it in mind :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,027
closed
[issues template] update deepspeed owners
add the right contact for deepspeed@accelerate
01-05-2023 21:27:46
01-05-2023 21:27:46
_The documentation is not available anymore as the PR was closed or merged._<|||||>That's an excellent idea, Sylvain. Added.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21027). All of your documentation changes will be reflected on that endpoint.
transformers
21,026
closed
Fix arguments passed to predict function in QA Seq2seq training script
# What does this PR do? I used the script for training Seq2seq QA and realized that `--do_predict` contains a bug - kwarg `outputs` should be an instance of class `EvalLoopOutput`, but NumPy array is passed instead. Together with extracting predictions in the body of the method `post_processing_function` (line 610 in run_seq2seq_qa.py) this results in an error every time you try to run tests with this script. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @karthikrangasai @sgugger
01-05-2023 21:22:12
01-05-2023 21:22:12
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,025
closed
Import error because no actual libraries, as far as I can tell.
### System Info Copy-and-paste the text below in your GitHub issue. - huggingface_hub version: 0.11.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.5 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: C:\Users\name\.huggingface\token - Has saved token ?: False - Configured git credential helpers: manager-core - FastAI: N/A - Tensorflow: 2.9.1 - Torch: 1.12.1+cu116 - Jinja2: 3.0.2 - Graphviz: N/A - Pydot: N/A Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.25.1 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.10.5 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1+cu116 (True) - Tensorflow version (GPU?): 2.9.1 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: want to yes - Using distributed or parallel set-up in script?: maybe?... If my understanding of deepspeed is right than I think so. ### Who can help? @pacman100 @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction accelerate config Traceback (most recent call last): File "/home/user/anaconda3/bin/accelerate", line 5, in <module> from accelerate.commands.accelerate_cli import main File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/__init__.py", line 7, in <module> from .accelerator import Accelerator File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/accelerator.py", line 27, in <module> from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/checkpointing.py", line 24, in <module> from .utils import ( File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/utils/__init__.py", line 101, in <module> from .megatron_lm import ( File "/home/user/anaconda3/lib/python3.6/site-packages/accelerate/utils/megatron_lm.py", line 32, in <module> from transformers.modeling_outputs import ( File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/__init__.py", line 30, in <module> from . import dependency_versions_check File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/dependency_versions_check.py", line 17, in <module> from .utils.versions import require_version, require_version_core File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/__init__.py", line 59, in <module> from .hub import ( File "/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/hub.py", line 32, in <module> from huggingface_hub import ( ImportError: cannot import name 'CommitOperationAdd' The transformers library has in the utils folder, __init__, in this file it has: from huggingface_hub import ( CommitOperationAdd, HfFolder, create_commit, create_repo, get_hf_file_metadata, hf_hub_download, hf_hub_url, whoami, ) however huggingface_hub doesn't have these files, so am I missing something or is that transformers needs to be updated? I have the latest version and the version from github locally for both huggingface_hub and transformers. 
Now that I'm looking at it, there seems to be another import mistake that'll be flagged once I get past this one: from huggingface_hub.utils import ( EntryNotFoundError, LocalEntryNotFoundError, RepositoryNotFoundError, RevisionNotFoundError, hf_raise_for_status, ) Would appreciate any assistance with this, and since I don't know if this should be considered a huggingface_hub error or a transformers error, I'll post it on both. ### Expected behavior I guess I expect to run into the next bug until I've solved all the bugs and can use accelerate and deepspeed (fingers crossed, knock on wood). For a non-tongue-in-cheek answer: I think that, according to the code at least, it should import from the different libraries and files described in the hub.py script, pulled from the huggingface_hub library. Obviously, I could be mistaken, but Python isn't having it and I haven't been able to find what hub.py wants, or where.
01-05-2023 20:32:31
01-05-2023 20:32:31
I'm not sure where you are looking but `huggingface_hub` init does have a `CommitOperationAdd` object [here](https://github.com/huggingface/huggingface_hub/blob/ccdfd33ede1500b364d3561ccd6d4b2cc76fe9b2/src/huggingface_hub/__init__.py#L106) and it's been there since 0.10.0 which is the minimal version of huggingface_hub required.<|||||>Is there a way to fix this easily or do I have to go through all of the modules and replace the ones that will try and call huggingface_hub with the modules it should be calling?<|||||>I have no idea what you mean here. The code you mention does not match what is actually in the libraries, so it looks like you should just update them to the latest versions and check that your Python environment is actually using those (and not some random older versions).<|||||>I guess I'm just a little confused because I have transformers version 4.25.1 and huggingface_hub version 0.11.1, I don't have any different versions whether it be Ubuntu or Windows. When I look on Github and go to src/transformers/utils there is a hub.py, and in it as far as I can see has the code that I have shown it wishes to import. When I go to the huggingface_hub __init__ it has the lines in a submodule function that tries to import it as strings. Trouble is, why is it going to the __init__, should it not just go to the modules to find the classes it is looking for? Or does __init__ actually serve more of a purpose for that code. After all I've never found Python likes going indirectly through several different messagers, say it calls CommitOperationAdd in hub.py. This goes to __init__ in huggingface_hub, which then goes to .hf_api, which calls from ._commit_api. I'd think it'd be easier for hub.py to just call from huggingface_hub import _commit_api. Unless what you're saying is huggingface has a library that doesn't match what is on Github or what you can pip install which in that case, may I have that library? I feel like it's more confusing to write out a paragraph about this than it is to just look at the code snippet I provided, however, I can show you the exact code starting from hub.py and maybe even a picture of where I found it located, and do each of these steps all the way to _commit_api. My environment does not want to go any farther until I figure out a way for it to import the class 'CommitOperationAdd'. I just don't know if I told it to grab directly from _commit_api if it'd break the entire program. I do appreciate you being patient with me and trying to help me figure this out though, don't get me wrong, and whenever you have suggested something I have looked into it. Like today I updated my Ubuntu, Pip, Transformers and Huggingface_hub, although besides Ubuntu everything was up to date. I may end up having to try and not use DeepSpeed because it just seems to have some pretty big bugs, maybe I'll try DeepSparse.<|||||>Hi @Shikamaru5, sorry not getting back to you before. `huggingface_hub` expose most of its top-level attributes in the `__init__.py` module (`create_commit`, `create_repo`, `CommitOperationAdd`,...). You can see the complete list [here](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/__init__.py#L310). This is by design so that users don't have to know in which submodule they can find the methods they need. 
It also guarantees some flexibility for us: as `huggingface_hub._commit_api` should not be imported directly by users (it is "private"), we can make changes to it without caring about backward compatibility, as long as the top-level attributes are still in `huggingface_hub/__init__.py`. What can be confusing when reading the code is that we are doing "lazy-loading". When doing `from huggingface_hub import something`, you are only importing `something` and the modules needed to make it work. This speeds up initialization by a lot, since it is very rare that you require all modules at once. If you are interested in reading more about it, please have a look at [this PR](https://github.com/huggingface/huggingface_hub/pull/874). Now, back to your problem. As @sgugger mentioned, that can be caused by a broken environment with older versions of the libs. For example, here it seems that you are using Python 3.10 (which is good) but the anaconda path seems to be referring to Python 3.6 (`"/home/user/anaconda3/lib/python3.6/site-packages/transformers/utils/hub.py"`). Could that be the cause of your issue? In any case, I guarantee you that once you have a clean environment both lines should load correctly in your Python interpreter: ```py from huggingface_hub import CommitOperationAdd from transformers.utils.hub import CommitOperationAdd ```<|||||>I'm doing this all locally but before, when my wsl was working, I had gotten it working by just directly importing from the places I needed to import from. You are correct about the python 3.6 because once I got past that issue it said python 3.6 was bad so I tried to get python fixed, broke all of it, uninstalled wsl and Ubuntu, and now for some reason after a few days of trying and even having someone who used to work on Ubuntu for wsl, on Twitter trying to help me fix it, I haven't been able to do it. My next thought is I've found a program called Colossal AI and I'm going to see if that'll work instead of trying DeepSpeed or DeepSparse. Thank you for taking the time to check out my issue and see how you can help; it's really appreciated.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
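For readers curious how the lazy-loading mentioned above can work at the module level, here is a bare-bones sketch using PEP 562's module `__getattr__`. This is illustrative only; `huggingface_hub`'s real implementation is more elaborate, and `mypackage` is a hypothetical name.

```python
# mypackage/__init__.py  (hypothetical package, not huggingface_hub itself)
import importlib

_ATTR_TO_SUBMODULE = {"CommitOperationAdd": "._commit_api"}

def __getattr__(name):
    # Called only when `name` isn't already defined; imports the submodule on demand.
    if name in _ATTR_TO_SUBMODULE:
        submodule = importlib.import_module(_ATTR_TO_SUBMODULE[name], __name__)
        return getattr(submodule, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```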
transformers
21,024
closed
transformers/examples/tensorflow/tokenclassification: Error at prepare_tf_dataset() using demo code with default parameters.
### System Info - `transformers` version: 4.26.0.dev0 - Platform: Windows-10-10.0.19044-SP0 - Python version: 3.9.13 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. git clone https://github.com/huggingface/transformers 2. cd transformers 3. pip install . 4. cd examples\tensorflow\token-classification 5. pip install -r requirements.txt 6. python run_ner.py \ --model_name_or_path bert-base-uncased \ --dataset_name conll2003 \ --output_dir /tmp/test-ner ### Expected behavior Expected example to fine-tunes BERT on CoNLL-2003.
01-05-2023 18:51:07
01-05-2023 18:51:07
When I attempt to run demo I get following error: Traceback (most recent call last): File "D:\transformer\foo\lib\site-packages\transformers\tokenization_utils_base.py", line 717, in convert_to_tensors tensor = as_tensor(value) ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (2,) + inhomogeneous part. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "D:\transformer\transformers\examples\tensorflow\token-classification\run_ner.py", line 592, in <module> main() File "D:\transformer\transformers\examples\tensorflow\token-classification\run_ner.py", line 415, in main tf_train_dataset = model.prepare_tf_dataset( File "D:\transformer\foo\lib\site-packages\transformers\modeling_tf_utils.py", line 1384, in prepare_tf_dataset tf_dataset = dataset.to_tf_dataset( File "D:\transformer\foo\lib\site-packages\datasets\arrow_dataset.py", line 405, in to_tf_dataset output_signature, columns_to_np_types = dataset._get_output_signature( File "D:\transformer\foo\lib\site-packages\datasets\arrow_dataset.py", line 258, in _get_output_signature test_batch = collate_fn(test_batch, **collate_fn_args) File "D:\transformer\foo\lib\site-packages\transformers\data\data_collator.py", line 43, in __call__ return self.tf_call(features) File "D:\transformer\foo\lib\site-packages\transformers\data\data_collator.py", line 347, in tf_call batch = self.tokenizer.pad( File "D:\transformer\foo\lib\site-packages\transformers\tokenization_utils_base.py", line 3017, in pad return BatchEncoding(batch_outputs, tensor_type=return_tensors) File "D:\transformer\foo\lib\site-packages\transformers\tokenization_utils_base.py", line 210, in __init__ self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) File "D:\transformer\foo\lib\site-packages\transformers\tokenization_utils_base.py", line 733, in convert_to_tensors raise ValueError( ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. Perhaps your features (`labels` in this case) have excessive nesting (inputs type `list` where type `int` is expected). My problem seems to be with prepare_tf_dataset() when run locally in python script. I have noticed that when I try notebooks/examples/token_classification-tf.ipynb in Google Colab everything works fine. <|||||>@adaml-iri **How to solve the issue :** Add "--pad_to_max_length True" as an argument, so to start training you need to write, `python run_ner.py --model_name_or_path bert-base-uncased --dataset_name conll2003 --output_dir /tmp/test-ner --pad_to_max_length True` **Why is it happening ?** It's due to shape mismatch in the training sample's labels.(In line 717 in https://github.com/huggingface/transformers/blob/main/src/transformers/tokenization_utils_base.py) where the code is trying to convert the labels to numpy array using `np.asarray` but all the examples doesn't have labels with same shape so it's happening. **Here is the output you might see if this is resolved :** ... All model checkpoint layers were used when initializing TFBertForTokenClassification. Some layers of TFBertForTokenClassification were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['classifier'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
You're using a BertTokenizerFast tokenizer. Please note that with a fast tokenizer, using the `__call__` method is faster than using a method to encode the text followed by a call to the `pad` method to get a padded encoding. No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! To disable this behaviour please pass a loss argument, or explicitly pass `loss=None` if you do not want your model to compute a loss. ***** Running training ***** Num examples = 14041 Num Epochs = 3.0 Instantaneous batch size per device = 8 Total train batch size = 8 2023-01-07 00:49:52.596921: W tensorflow/core/framework/dataset.cc:769] Input of GeneratorDatasetOp::Dataset will not be optimized because the dataset does not implement the AsGraphDefInternal() method needed to apply optimizations. Epoch 1/3 110/1755 [>.............................] - ETA: 2:28:11 - loss: 0.3548 Let me know if you managed to resolve it or not, Thanks, susnato.<|||||>Thank you for your quick response. Everything works now. Much appreciated πŸ‘ <|||||>Hi @adaml-iri - sorry for the delay with dealing with this! I'm glad your issue got resolved, but when I run the code locally I don't get the same issue, and I don't think that example should require `pad_to_max_length` to be set to work. Can you try updating `datasets` as well with `pip install --upgrade datasets` and then checking if the issue persists?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
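The "inhomogeneous shape" failure described above can be reproduced in isolation — a tiny sketch (not the example script itself) showing why padding the labels makes the batch collatable:

```python
import numpy as np

ragged_labels = [[0, 1, 2], [0, 1]]        # per-example labels of different lengths
padded_labels = [[0, 1, 2], [0, 1, -100]]  # what padding to a common length produces

np.asarray(padded_labels, dtype=np.int32)  # works: the batch is rectangular
np.asarray(ragged_labels, dtype=np.int32)  # raises ValueError ("setting an array element with a sequence")
```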
transformers
21,023
closed
Fix bigbird random attention
# What does this PR do? Fixes the bug mentioned in the [issue](https://github.com/huggingface/transformers/issues/17355) by transitioning from `np.random` to `jax.random`. It also adds several minor changes to be able to run the new code and pass all the tests. <!-- Remove if not applicable --> Fixes # (issue) https://github.com/huggingface/transformers/issues/17355 ## Before submitting - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. ## Who can review? @sanchit-gandhi @thevasudevgupta @patrickvonplaten
01-05-2023 18:06:39
01-05-2023 18:06:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @sanchit-gandhi! Thank you very much for this detailed review. It is really helpful since this is my first time working with JAX :). I will apply the changes during the weekend. Have a great day!<|||||>Awesome, very glad to hear that the pointers were helpful πŸ€— feel free to post here if you have any questions - it's a bit of a fiddly fix and I'm more than happy to help if you get stuck on anything! There's actually a similar rng trick that we use in Flax BEIT: https://github.com/huggingface/transformers/blob/b210c83a78022226ce48402cd67d8c8da7afbd8d/src/transformers/models/beit/modeling_flax_beit.py#L161 You can follow through the logic we employ with `"droppath"` and `droppath_rng` to see a working example of what we want to do here!<|||||>Hi @sanchit-gandhi! Sorry for the late response but lately I was in the process of changing workplaces as well as on vacation so I have not checked github for a while :). I have implemented your comments but I have two follow up questions: 1) Should I remove all `numpy` calls in the modeling file even the ones like `np.zeros` or `np.arange` or only the ones related to the randomness? 2) I have some problems with `indices_prng_key` for the scenario when `FlaxBigBirdBlockSparseAttention` is used but `deterministic=True` for which `indices_prng_key=None`. Since even though deterministic is set to False the random jax functions are still being called and in this case the provided `rng_key=None` which results in the error. <|||||>Hey @Bearnardd! Awesome to see that you've picked-up this PR again! 1. Yes please! If you could replace all NumPy calls with their JAX equivalents that would be grand! This will keep all tensors on the accelerator device (GPU/TPU) rather than pulling them back to the host 2. In this case, could we add `if/else` logic that returns the correct attention mask when deterministic? E.g. ```python if self.deterministic: # do the deterministic inference attention with no randomness else: # do the stochastic training attention with jnp randomness ``` A similar logic is used in the Flax dropout module: https://flax.readthedocs.io/en/latest/_modules/flax/linen/stochastic.html#Dropout<|||||>Hi @sanchit-gandhi! I have replaced all NumPy calls but frankly I am not sure if I understand the second part correctly. Could you explain what do you mean by `deterministic inference attention` and where that `if/else` logic should be places? <|||||>Hey @Bearnardd! Very cool! Do you mind pushing your changes to origin so that I can see the code? This might make it easier to help-out with the deterministic issue! Essentially, we only want to do the random operations when we're training, not at inference time. During inference, we want everything to be deterministic. This is like dropout - we only do this during training and not inference, when we want to disable dropout and have all the nodes be active. We can check if the model is deterministic through the attribute `self.determisitic` (like `self.training` in PyTorch). What we need to do is add some logic so that the random calls are only made _if_ `self.deterministic=False` (training): we know we're in training mode and we want all of the randomness, so we activate all the random calls. _Else_ `self.deterministic=True` (inference) and we're indeterministic, then we don't want to do any of the randomness, e.g. skip all of it.<|||||>Hi @sanchit-gandhi! 
Sure, I will push the changes around Friday since I am currently on a business trip and I do not have my personal laptop :/<|||||>Hi @sanchit-gandhi! I have pushed the changes.<|||||>Hi @sanchit-gandhi all `Copied from` statements are back, except the one for PredictionHead, since a different dtype still counts as not copied and it results in an error<|||||>Hi @amyeroberts @sanchit-gandhi! I changed the if check to use `deterministic` and added `unittest.skip` for the equivalence tests. Probably around the weekend I will create an issue regarding the bug in PyTorch's implementation, as well as a PR to fix it. Nevertheless, I guess this PR is ready to be merged.<|||||>Yep - it all looks good to me. Thanks again for this contribution, @Bearnardd!
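A toy sketch of the deterministic-vs-training pattern discussed above (illustrative only — not BigBird's actual block-selection code): random draws come from a named rng stream and are skipped entirely at inference time.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class RandomBlockIndices(nn.Module):
    num_blocks: int

    @nn.compact
    def __call__(self, deterministic: bool = True):
        if deterministic:
            # inference: no randomness, keep a fixed block order
            return jnp.arange(self.num_blocks)
        # training: draw from a dedicated "indices" rng stream
        rng = self.make_rng("indices")
        return jax.random.permutation(rng, self.num_blocks)

# usage: RandomBlockIndices(4).apply({}, deterministic=False, rngs={"indices": jax.random.PRNGKey(0)})
```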
transformers
21,022
closed
[NumPy] Remove references to deprecated NumPy type aliases
This change replaces references to a number of deprecated NumPy type aliases (np.bool, np.int, np.float, np.complex, np.object, np.str) with their recommended replacement (bool, int, float, complex, object, str). NumPy 1.24 drops the deprecated aliases, so we must remove uses before updating NumPy. See huggingface/diffusers#1810 for a similar issue in diffusers. Co-authored-by: Peter Hawkins <[email protected]>
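For reference, the kind of substitution this PR makes (a small sketch, not taken from the diff):

```python
import numpy as np

mask = np.zeros(4, dtype=bool)              # was: dtype=np.bool
ids = np.arange(4, dtype=int)               # was: dtype=np.int
vals = ids.astype(float)                    # was: np.float
objs = np.array(["a", "b"], dtype=object)   # was: dtype=np.object
```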
01-05-2023 14:19:44
01-05-2023 14:19:44
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I heard you might be the right person to review this. Please let me know if you have any questions πŸ€—
transformers
21,021
closed
`blip` support for training
# What does this PR do? Fixes: https://discuss.huggingface.co/t/finetune-blip-on-customer-dataset-20893/28446 Before this PR, it was not possible to fine-tune BLIP on a custom dataset for various reasons, mainly because the code did not support "on-the-fly" right shifting of `decoder_input_ids`. This PR also harmonizes some attributes inside `BlipForQuestionAnswering` --> I replaced `decoder_bos_token_id` by `decoder_start_token_id` to make it consistent with T5 etc. For all VQA models we should (at train time): 1- make sure `labels` is not None 2- create `decoder_input_ids` based on those (make sure the padding is always on the right side) 3- Infer on the text decoder I feel that we should probably add more tests and create a `VisualQuestionAnsweringMixin` in a follow-up PR to make sure this is done for all VQA models (as I'd expect more VQA models to be added this year) cc @NielsRogge @sgugger
01-05-2023 14:02:45
01-05-2023 14:02:45
Perfect, thanks for clarifying @sgugger ! <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada @sgugger hi, thanks for contributing this code, but I found two possible bugs: 1. the code shift `labels` to `decoder_input_id` ([here](https://github.com/huggingface/transformers/pull/21021/files#diff-e483643fc206cde147f2483924507d9a407db540b01bf4028c72b8ec6cc3ffabR1209)) and the code shift `labels` when computing loss [(here)](https://github.com/huggingface/transformers/pull/21021/files#diff-00846f08e1b2a41509f5f669a49fc36baac8555a83da24c30f8ac9e7a9024d59R900) should only keep one, and I prefer to keep the former one and delete the later. 2. The BERT tokenizer has added a start token before the sequence, and the `_shift_right` function will add another one (pad), so it should use `forced_bos_token_id` like BART for generation.<|||||>Moreover, I think the [`reduction`](https://github.com/huggingface/transformers/pull/21021/files#diff-e483643fc206cde147f2483924507d9a407db540b01bf4028c72b8ec6cc3ffabR1227) function of `CrossEntropyLoss` should be set to `'mean'`, or you will get a loss more than tens or hundreds, which is uncommon and may affect the optimization.<|||||>Thanks for your valuable comments @StevenTang1998! @younesbelkada in any case it would probably be best to have verified this branch in a notebook on a toy image captioning dataset. Making the code as similar as possible to our other generative models (like T5, BART or GPT-2) would be great.<|||||>Hi @younesbelkada, I encountered the same error as mentioned by @dxlong2000. I cloned this repository but the error is still there. ValueError: Expected input batch_size (0) to match target batch_size (29).<|||||>Hi @faiqff94 All the issues related to BLIP training should be resolved, if you follow what has been done in https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing you should not get any issue. Can you share a reproducible handy script?
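For context on the "shifting" being debated above, this is the usual right-shift helper as implemented in BART/T5-style models (a sketch — BLIP's exact code may differ), which should be applied exactly once per forward pass:

```python
import torch

def shift_tokens_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    # -100 marks ignored label positions; they must become real (pad) tokens for the decoder input
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids
```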
transformers
21,020
closed
Time series transformer: input projection and Std scaler
# What does this PR do? Add initial input projection layer and `d_model` hyperparam Added a StdScaler for time series transformer as well as corresponding features. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
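A minimal sketch of what a std scaler for this setting typically looks like (mean/std computed over the observed context, weighted by the observed-values mask) — illustrative only; the class and argument names in the PR may differ:

```python
import torch

class StdScaler:
    def __init__(self, dim: int = 1, eps: float = 1e-5):
        self.dim, self.eps = dim, eps

    def __call__(self, data: torch.Tensor, observed_mask: torch.Tensor):
        # mean/std over the observed positions only, avoiding division by zero
        denominator = observed_mask.sum(self.dim, keepdim=True).clamp_min(1.0)
        loc = (data * observed_mask).sum(self.dim, keepdim=True) / denominator
        variance = (((data - loc) * observed_mask) ** 2).sum(self.dim, keepdim=True) / denominator
        scale = torch.sqrt(variance + self.eps)
        return (data - loc) / scale, loc, scale
```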
01-05-2023 11:41:28
01-05-2023 11:41:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>@NielsRogge I have one more fix here... I can make `static_real_features` optional<|||||>@NielsRogge I believe this can be merged so I can then start to add these changes to the Informer model... what do you think?
transformers
21,019
closed
Fix `test_run_seq2seq_bnb` CI
# What does this PR do? Fix `test_run_seq2seq_bnb` CI. See [failed job run](https://github.com/huggingface/transformers/actions/runs/3834635537/jobs/6527258225) It seems the expected reduced GPU memory usage only happens when running the test on multi GPU env. I simply add `require_torch_multi_gpu` without trying to understand why it fails with single GPU env. I can try to figure it out, but the probability that @stas00 knows the reason > 1.0, so I would like to see if he has any comment first. Error ``` tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_bnb (line 256) AssertionError: -0.0023021928988980356 not greater than 10 : should use very little peak gpu memory with BNB, compared to without itbut got gpu_peak_mem_orig=509447168 and gpu_peak_mem_bnb=510622720 ```
01-05-2023 11:21:18
01-05-2023 11:21:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>cc @manuelciosici<|||||>hmm, I didn't write this test and don't know why these numbers were set, so perhaps it's easier to ask the one who wrote it? If it's not possible please let me know I will be able to study it later, as I'm off to the airport now.<|||||>but yes, such tests should do the measurements per number of gpus usually. i.e. 1 gpu - measurement 1, 2 gpus - measurement 2, etc. the easiest fix is to ensure you run on exactly that many gpus always and then you need only one "truth" to measure against. <|||||>(we can investigate later, not urgent) @stas00 The first error occurred is that we expect the GPU memory usage will be larger when not using BNB (i.e. `gpu_peak_mem_orig `) than when using BNB (i.e. `gpu_peak_mem_bnb`). The test assert 10% difference. However, on our single GPU runner, ```bash gpu_peak_mem_orig=509447168 gpu_peak_mem_bnb=510622720 ``` which is quite weird (intuitively). - We can definitely adjust the values for different environment, but probably it's a good idea to understand what's going on here if possible. - We left comment in the original PR page, but haven't heard from the PR author. But I cc them in a comment in this PR. <|||||>some quick thoughts: - do we clear cuda cache there between the measurements? and first call `gc.collect()` (then cache clear) - using a larger model should make the difference (savings) more distinct and of course the test might be failing if bnb is broken - recent update? try earlier version?<|||||>I can reproduce the failure, looking<|||||>Fixed here https://github.com/huggingface/transformers/pull/21030 <|||||>Close in favor of #21030 21030
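A sketch of the measurement hygiene suggested in the comments above (garbage-collect, empty the CUDA cache, reset the peak counter before each run) — not necessarily what the test itself does:

```python
import gc
import torch

def peak_gpu_mb(workload) -> float:
    gc.collect()
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    workload()
    return torch.cuda.max_memory_allocated() / 2**20
```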
transformers
21,018
closed
Generate: post-generate config TF doctest fix
# What does this PR do? Same as in this PR https://github.com/huggingface/transformers/pull/20804, but for TF 🀦
01-05-2023 11:16:57
01-05-2023 11:16:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,017
closed
The generation input shape and the output shape from the official scripts are completely different for the TFLite model
### System Info - `transformers` version: 2.3.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.12.1+cu113 (True) - Tensorflow version (GPU?): 2.9.2 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no and - `transformers` version: 4.25.1 - Platform: Linux-5.10.133+-x86_64-with-glibc2.27 - Python version: 3.8.16 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): 1.13.0+cu116 (False) - Tensorflow version (GPU?): 2.9.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @patric @anton-l @sanchit-gandhi @Rocketknight1 ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Upon cloning the gpt2.py from the [link ](https://github.com/huggingface/tflite-android-transformers/tree/master/models_generation) renders input shape as [3 5] instead of [1 64] ``` import transformers import tensorflow print(transformers.__version__) print(tensorflow.__version__) ``` ``` 2.3.0 2.2.0 ``` ``` !wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-64.tflite import numpy as np import tensorflow as tf tflite_model_path = 'gpt2-64.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]['shape'] #print the output input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() output_data = interpreter.get_tensor(output_details[0]['index']) print(output_data.shape) print(input_shape) ``` ``` >(1, 64, 50257) >[ 1 64] ``` ``` import tensorflow as tf from transformers import TFGPT2LMHeadModel import numpy as np model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2' input_spec = tf.TensorSpec([1, 64], tf.int32) model._set_inputs(input_spec, training=False) converter = tf.lite.TFLiteConverter.from_keras_model(model) # For FP16 quantization: # converter.optimizations = [tf.lite.Optimize.DEFAULT] # converter.target_spec.supported_types = [tf.float16] tflite_model = converter.convert() open("gpt2-64-2.tflite", "wb").write(tflite_model) tflite_model_path = 'gpt2-64-2.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]['shape'] #print the output input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() output_data = interpreter.get_tensor(output_details[0]['index']) print(output_data.shape) print(input_shape) ``` ``` >(3, 5, 50257) >[3 5] ``` **Expected >(1, 64, 50257) >[ 1 64]** ``` import 
transformers import tensorflow print(transformers.__version__) print(tensorflow.__version__) >4.25.1 > 2.9.2 ``` ``` !wget https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-64.tflite import numpy as np import tensorflow as tf tflite_model_path = 'gpt2-64.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]['shape'] #print the output input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() output_data = interpreter.get_tensor(output_details[0]['index']) print(output_data.shape) print(input_shape) ``` ``` >(1, 64, 50257) > [ 1 64] ``` ``` import tensorflow as tf from transformers import TFGPT2LMHeadModel import numpy as np model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2' input_spec = tf.TensorSpec([1, 64], tf.int32) model._set_inputs(input_spec, training=False) converter = tf.lite.TFLiteConverter.from_keras_model(model) # For FP16 quantization: # converter.optimizations = [tf.lite.Optimize.DEFAULT] # converter.target_spec.supported_types = [tf.float16] tflite_model = converter.convert() open("gpt2-64-2.tflite", "wb").write(tflite_model) tflite_model_path = 'gpt2-64-2.tflite' # Load the TFLite model and allocate tensors interpreter = tf.lite.Interpreter(model_path=tflite_model_path) interpreter.allocate_tensors() # Get input and output tensors input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() input_shape = input_details[0]['shape'] #print the output input_data = np.array(np.random.random_sample((input_shape)), dtype=np.int32) interpreter.set_tensor(input_details[0]['index'], input_data) interpreter.invoke() output_data = interpreter.get_tensor(output_details[0]['index']) print(output_data.shape) print(input_shape) ``` ``` >(2, 1, 12, 1, 64) >[1 1] ``` **Expected >(1, 64, 50257) >[ 1 64]** How can we fix the same ? ### Expected behavior Expected inputshape is [ 1 64] and output shape is (1, 64, 50257)
01-05-2023 10:49:53
01-05-2023 10:49:53
cc @gante and @Rocketknight1 <|||||>@sgugger @gante @Rocketknight1 any update on the same ? I even tried the same with the collab below taking latest version of transformers tensorflow into consideration , I get the same issue as above. When we import the new model with different output shape onto the android project (gpt2) , I get the issue as below : ``` E/AndroidRuntime: FATAL EXCEPTION: main Process: co.huggingface.android_transformers.gpt2, PID: 17293 java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model at org.tensorflow.lite.NativeInterpreterWrapper.createModelWithBuffer(Native Method) at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:60) at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224) at co.huggingface.android_transformers.gpt2.ml.GPT2Client$loadModel$2.invokeSuspend(GPT2Client.kt:138) at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33) at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241) at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594) at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60) at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740) ``` https://gist.github.com/sudo-carson/158d9b9e7208e42977b08d966f3f4989 <|||||>Hello @sgugger @gante @Rocketknight1 The issue has been fixed, the below code can be used. The output should be keras_output.logits as in the code below ``` model = TFGPT2LMHeadModel.from_pretrained('gpt2') # or 'distilgpt2' input = tf.keras.Input([ 64 ], batch_size=1, dtype=tf.int32) keras_output = model(input, training=False) model = tf.keras.Model(input, keras_output.logits) converter = tf.lite.TFLiteConverter.from_keras_model(model) # For FP16 quantization: # converter.optimizations = [tf.lite.Optimize.DEFAULT] # converter.target_spec.supported_types = [tf.float16] tflite_model = converter.convert() open("model.tflite", "wb").write(tflite_model) ```<|||||>Hey @generic-matrix πŸ‘‹ Thank you for raising the issue and for reporting its fix as well <3 For context, we haven't been checking whether our models are supported by TFLite. That's something we plan to rectify over this year, with notebooks and demos as well! (closing as it is fixed)
transformers
21,016
closed
VideoMAE missing CLS tokens in embedding
### System Info I'm not sure if I've missed something in the code, but I can't seem to find where the CLS tokens are added? I have input data of shape (64,45,2,32,32) with tubelet size = 5, patch_size = 4. This results in a sequence length of 576. From my understanding that is the total number of tubelets. I see that after the data is passed through the embedding layer the final embedding shape is (64,576,768) where 768 is the hidden size. However, should the dimensions not be (64,577,768) since we should be adding a CLS token to the sequence? Would be great to hear back soon because I'm not sure if I'm wrong or if there is something wrong with the code. Thanks! @NielsRogge ### Reproduction ```python import torch from transformers import VideoMAEConfig, VideoMAEModel  pixel_values = torch.randn(1, 45, 2, 32, 32) config = VideoMAEConfig() config.num_frames = 45 config.image_size = 32 config.patch_size = 4 config.tubelet_size = 5 config.num_channels = 2 num_patches_per_frame = (config.image_size // config.patch_size) ** 2 seq_length = (config.num_frames // config.tubelet_size) * num_patches_per_frame print(seq_length) videomae = VideoMAEModel(config) output = videomae(pixel_values, output_hidden_states=True) sequence_output = output[0] print(sequence_output.shape) ``` ### Expected behavior seq_length = 576 sequence_output = (1,577,768) The embedding sequence length should be total number of tubelets + 1
01-05-2023 09:37:03
01-05-2023 09:37:03
Hi, VideoMAE doesn't use a CLS token, so this can be fixed in the docstring. The number of tokens sent through the Transformer equals (number of frames // tubelet_size) * (height // patch_size) * (width // patch_size). For video classification, the authors average pool the final hidden states of the tokens before applying a final classification head. Do you mind opening a PR to fix [this docstring](https://github.com/huggingface/transformers/blob/8fb4d0e4b46282d96386c229b9fb18bf7c80c25a/src/transformers/models/videomae/modeling_videomae.py#L901-L902)?
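A short sketch of the average pooling described above, using the same toy configuration as in the reproduction (randomly initialized model, so only the shapes are meaningful):

```python
import torch
from transformers import VideoMAEConfig, VideoMAEModel

config = VideoMAEConfig(num_frames=45, image_size=32, patch_size=4, tubelet_size=5, num_channels=2)
model = VideoMAEModel(config)

pixel_values = torch.randn(1, 45, 2, 32, 32)
with torch.no_grad():
    last_hidden_state = model(pixel_values).last_hidden_state  # (1, 576, 768) -- no CLS token

# For video classification, the final hidden states are average-pooled
# before the classification head is applied:
pooled_output = last_hidden_state.mean(dim=1)  # (1, 768)
```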
transformers
21,015
closed
Domain-specific word similarity from documents question-answer
I am trying to create a chat bot-like application (inspired by ChatGPT). The bot (or application) should be able to answer questions about a software product's help documentation. I have tried to fine-tune the tilbert_base_uncased model from Hugging Face on fewer than 100 annotated question-answer pairs in SQuAD format, but my model is not performing well: the F1 score is about 0.3. Can anyone who has worked on the same problem suggest useful approaches or docs for implementing this kind of document-based question answering?
01-05-2023 06:52:45
01-05-2023 06:52:45
Please use the [forums](https://discuss.huggingface.co/) to ask such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
21,014
closed
Update task summary part 1
This PR reworks the task summary to be more conceptual and provides more explanation about a topic to help users better understand it. It'll be focused more on understanding instead of practical steps. The update will be split into two parts: 1. Describe the tasks πŸ€— Transformers is capable of solving (the focus of this PR). Provide some context about how these tasks used to be solved, how they're handled now, and practical applications of each task. 2. Explain how πŸ€— Transformers solve these tasks. This'll be a more conceptually advanced and separate page.
01-05-2023 01:17:50
01-05-2023 01:17:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>> It is called "what Transformers can do", so I think the introductions of each modality should just explain what each modality is and abstain from comparing models Thanks for the feedback, this helped me refine the scope of this page! It was a little difficult trying to discuss the tasks and not be tempted to also talk about the models since the two are so closely related. πŸ˜… I updated the intro of each modality with an explanation of the input data and how to get it into a useable format by the model to solve a task.
transformers
21,013
closed
HF models use deprecated pytorch function invocations
Masks are frequently created in `uint8` type, see e.g. here https://github.com/huggingface/transformers/blob/8fb4d0e4b46282d96386c229b9fb18bf7c80c25a/src/transformers/models/gpt_neox_japanese/modeling_gpt_neox_japanese.py#L186 or here https://github.com/huggingface/transformers/blame/52dd2b61bff8af5b6409fdd5ec92a9b3114f3636/src/transformers/models/codegen/modeling_codegen.py#L101, and then used in `torch.where`. Use of `uint8` masks in `torch.where` has been deprecated for couple years, and though it still works in pytorch eager (with a warning), support for this has been removed in `torch.compile`. It would be good to audit places where uint8 masks are used and replace them with bool masks.
01-04-2023 22:10:27
01-04-2023 22:10:27
Thanks a lot for flagging @ngimel ! cc @ArthurZucker and @ydshieh <|||||>Could use of `torch.where` be the cause of these `torch.compile()` errors in `AutoModelForSeq2SeqLM` models? ```text File "/home/kastanday/utils/mambaforge3/envs/torch2.0/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 949, in <graph break in forward> raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") ValueError: You have to specify either input_ids or inputs_embeds ``` Unsupported `ConstantVariable(str)`: ``` File "/home/kastanday/utils/mambaforge3/envs/torch2.0/lib/python3.10/site-packages/torch/_dynamo/exc.py", line 71, in unimplemented raise Unsupported(msg) torch._dynamo.exc.Unsupported: call_function BuiltinVariable(ValueError) [ConstantVariable(str)] {} ``` Minimal reproduction: ```python import torch from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_name = "google/flan-t5-base" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) model = torch.compile(model) # PyTorch 2.0 from torch import _dynamo _dynamo.config.verbose = True _dynamo.explain(model) ``` Thanks for any guidance. <|||||>Should be fixed on monday the PR is ready πŸ˜‰
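To illustrate the kind of change being requested, here is a minimal sketch (not the exact code of any particular model) of building the causal mask as `bool` instead of `uint8` before it reaches `torch.where`:

```python
import torch

seq_len = 4
scores = torch.randn(seq_len, seq_len)

# Deprecated pattern: a uint8 mask fed to torch.where
causal_mask_u8 = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.uint8))

# Preferred pattern: build (or cast) the mask as bool
causal_mask = torch.tril(torch.ones((seq_len, seq_len), dtype=torch.bool))
# or: causal_mask = causal_mask_u8.to(torch.bool)

mask_value = torch.full([], torch.finfo(scores.dtype).min, dtype=scores.dtype)
masked_scores = torch.where(causal_mask, scores, mask_value)
```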
transformers
21,012
closed
Add document token classification pipeline (#1)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds Pipeline for Document Token Classification. Code is mostly based on PR for Document Question Answering. https://github.com/huggingface/transformers/pull/18414 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ @Narsil If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 --> @Narsil
01-04-2023 21:31:22
01-04-2023 21:31:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21012). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @vaishak2future Did you know that layoutlm already implements `object-detection` : https://huggingface.co/Narsil/layoutlmv3-finetuned-funsd This might be close enough to this, no ?<|||||>@Narsil , thank you for looking at the PR. While Object Detection does solve this particular instance of the problem, we see Document Token Classification as a multimodal task separate from the unimodal task of Object Detection. Document Token Classification requires two modalities - an image and a set of tokens. This gives control to the user to use their OCR of choice (especially for languages that are not well handled by Tesseract), but also to choose their own tokens that might not be text on the image itself. <|||||>@Narsil All checks are now passing. Could you please review? Thanks.<|||||>Hi @vaishak2future , I understand the ideas to remove the Tesseract where needed. For the extra tokens, where you imagining extracting tokens from PDF directly maybe ? (This was also an idea behind `document-question-answering` where the idea is that we could always fuse the pipeline later with regular `visual-question-answering`). Here there are a few things that make me hesitant: - Pipelines are made to be usable by non ML programmers, here, it's kind of tricky since tokens and boxes and such are quite ML involved - Pipelines are made to be relatively generic over different model types, here only layoutlm would work as-is. The idea is to keep the number of pipelines relatively small, so discoverable by users. That being said, enabling power users like your use case should be supported IMO. I would have to look at how to implement within `object-detection`. But I don't see any issue with adding extra parameters for such niche, but extremely useful use-cases. For instance `asr` pipeline enables users to send the raw audio frames directly which IMO is seemingly the same idea (bypass or modify very specifically some preprocessing which would be the OCR in your case) What do you think ? Pinging @sgugger @LysandreJik for other opinions on this. Regardless, I briefly looked at the PR, the code seems good, there are a few nits regarding how tests are structured and how many different inputs are accepted, but overall it looks quite good. I'll delay my comments after we reach a decision on this as there's no big structural blockers on my end imo.<|||||>This looks very specific to one model. We can't host all possible pipelines in Transformers, so in such a case, we should rely on the code on the Hub for pipeline feature. You can see pointers [here](https://huggingface.co/docs/transformers/add_new_pipeline#share-your-pipeline-on-the-hub).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
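For reference, a bare-bones sketch of the code-on-the-Hub route suggested above: a custom `Pipeline` subclass implementing the four standard methods. The class name, input keys and tokenizer arguments are illustrative placeholders, not a fixed contract:

```python
from transformers import Pipeline


class DocumentTokenClassificationPipeline(Pipeline):
    def _sanitize_parameters(self, **kwargs):
        # split incoming kwargs into preprocess / forward / postprocess params
        return {}, {}, {}

    def preprocess(self, inputs):
        # `inputs` is assumed to be a dict with user-supplied OCR words and boxes;
        # the exact keys and tokenizer arguments are illustrative only.
        return self.tokenizer(inputs["words"], boxes=inputs["boxes"], return_tensors="pt")

    def _forward(self, model_inputs):
        return self.model(**model_inputs)

    def postprocess(self, model_outputs):
        # map per-token logits to label ids; real post-processing would also
        # aggregate sub-word tokens back to words
        return model_outputs.logits.argmax(-1)
```

Such a class can then be registered and shared with the custom-pipeline tooling described in the linked documentation, rather than living in `transformers` itself.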
transformers
21,011
closed
Bump gitpython from 3.0.2 to 3.1.30 in /examples/research_projects/distillation
[//]: # (dependabot-start) ⚠️ **Dependabot is rebasing this PR** ⚠️ Rebasing might not happen immediately, so don't worry if this takes some time. Note: if you make any changes to this PR yourself, they will take precedence over the rebase. --- [//]: # (dependabot-end) Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.0.2 to 3.1.30. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.30 - with important security fixes</h2> <p>See <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1515">gitpython-developers/GitPython#1515</a> for details.</p> <h2>3.1.20</h2> <p>No release notes provided.</p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/141cd651e459bff8919798b3ccf03dfa167757f6"><code>141cd65</code></a> adjust changelog prior to release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/678a8fe08dd466fcfe8676294b52887955138960"><code>678a8fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1521">#1521</a> from stsewd/block-insecure-options</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ae6a6e4b088a35c0fc7b17940722c8a515f7bee7"><code>ae6a6e4</code></a> Fix type hint on create_tag</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5bce9b4f7fc825d8bcd450325e6dda78c49f0ca0"><code>5bce9b4</code></a> Document PushInfoList</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/f4f2658d5d308b3fb9162e50cd4c7b346e7a0a47"><code>f4f2658</code></a> Updates from review</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9dc43926207b2205d77511c6ffd40944199f0c2d"><code>9dc4392</code></a> Submodule tests</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c8ae33b9314a7d3716827b5cb705a3cd0a2e4a46"><code>c8ae33b</code></a> More tests</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/b92f01a3a38fc8e171d08575c69de9733811faa6"><code>b92f01a</code></a> Update/add tests for Repo.clone*</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/fd2c6da5f82009398d241dc07603fbcd490ced29"><code>fd2c6da</code></a> Updates from review</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/e6108c7997f5c8f7361b982959518e982b973230"><code>e6108c7</code></a> Block unsafe options and protocols by default</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.0.2...3.1.30">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.0.2&new-version=3.1.30)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
01-04-2023 20:29:11
01-04-2023 20:29:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21011). All of your documentation changes will be reflected on that endpoint.
transformers
21,010
closed
Bump gitpython from 3.1.18 to 3.1.30 in /examples/research_projects/decision_transformer
Bumps [gitpython](https://github.com/gitpython-developers/GitPython) from 3.1.18 to 3.1.30. <details> <summary>Release notes</summary> <p><em>Sourced from <a href="https://github.com/gitpython-developers/GitPython/releases">gitpython's releases</a>.</em></p> <blockquote> <h2>v3.1.30 - with important security fixes</h2> <p>See <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1515">gitpython-developers/GitPython#1515</a> for details.</p> <h2>3.1.20</h2> <p>No release notes provided.</p> </blockquote> </details> <details> <summary>Commits</summary> <ul> <li><a href="https://github.com/gitpython-developers/GitPython/commit/141cd651e459bff8919798b3ccf03dfa167757f6"><code>141cd65</code></a> adjust changelog prior to release</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/678a8fe08dd466fcfe8676294b52887955138960"><code>678a8fe</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/gitpython-developers/GitPython/issues/1521">#1521</a> from stsewd/block-insecure-options</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/ae6a6e4b088a35c0fc7b17940722c8a515f7bee7"><code>ae6a6e4</code></a> Fix type hint on create_tag</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/5bce9b4f7fc825d8bcd450325e6dda78c49f0ca0"><code>5bce9b4</code></a> Document PushInfoList</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/f4f2658d5d308b3fb9162e50cd4c7b346e7a0a47"><code>f4f2658</code></a> Updates from review</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/9dc43926207b2205d77511c6ffd40944199f0c2d"><code>9dc4392</code></a> Submodule tests</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/c8ae33b9314a7d3716827b5cb705a3cd0a2e4a46"><code>c8ae33b</code></a> More tests</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/b92f01a3a38fc8e171d08575c69de9733811faa6"><code>b92f01a</code></a> Update/add tests for Repo.clone*</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/fd2c6da5f82009398d241dc07603fbcd490ced29"><code>fd2c6da</code></a> Updates from review</li> <li><a href="https://github.com/gitpython-developers/GitPython/commit/e6108c7997f5c8f7361b982959518e982b973230"><code>e6108c7</code></a> Block unsafe options and protocols by default</li> <li>Additional commits viewable in <a href="https://github.com/gitpython-developers/GitPython/compare/3.1.18...3.1.30">compare view</a></li> </ul> </details> <br /> [![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=gitpython&package-manager=pip&previous-version=3.1.18&new-version=3.1.30)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores) Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`. 
[//]: # (dependabot-automerge-start) [//]: # (dependabot-automerge-end) --- <details> <summary>Dependabot commands and options</summary> <br /> You can trigger Dependabot actions by commenting on this PR: - `@dependabot rebase` will rebase this PR - `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it - `@dependabot merge` will merge this PR after your CI passes on it - `@dependabot squash and merge` will squash and merge this PR after your CI passes on it - `@dependabot cancel merge` will cancel a previously requested merge and block automerging - `@dependabot reopen` will reopen this PR if it is closed - `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually - `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself) - `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself) - `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language - `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language - `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language - `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts). </details>
01-04-2023 20:29:06
01-04-2023 20:29:06
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_21010). All of your documentation changes will be reflected on that endpoint.
transformers
21,009
closed
Generate: FLAX infers pad token in its absence and has functional example
# What does this PR do? Some bug fixing in advance of #21007 (PR that adds the generation config to Flax), to ensure we start from a functional Flax generate codebase. In particular: 1. Flax now infers a value for `pad_token_id` when it is `None` and `eos_token_id` is not `None`, like TF and PT do. This is very helpful for open-ended text generation examples, like with GPT2; it was an open request (https://github.com/huggingface/transformers/issues/18884) and was one of the causes of failure in the existing example. This also includes the recent changes of #20727, where `eos_token_id` can be a list of tokens. 2. An `int32` type specification was missing for the special tokens -- when converted to JAX variables, JAX assumed they were `float32`; 3. The existing Flax generate example is now part of our doctests, and runs.
01-04-2023 20:24:23
01-04-2023 20:24:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,008
closed
Make sure dynamic objects can be saved and reloaded
# What does this PR do? Fixes #20884 This PR makes sure that models that use the code on the Hub feature can be saved and repushed while still including the necessary code files. as reported in #20884, this was not the case previously. The fix is simple enough and the tests have been extended to test this use case.
01-04-2023 20:00:28
01-04-2023 20:00:28
_The documentation is not available anymore as the PR was closed or merged._
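A small sketch of the round trip this PR keeps working; the checkpoint name is a placeholder for any repo that relies on `trust_remote_code=True`:

```python
from transformers import AutoModel

# Load a model whose class is defined by code on the Hub
model = AutoModel.from_pretrained("some-user/model-with-custom-code", trust_remote_code=True)

# Saving should now also copy the required .py files next to the weights ...
model.save_pretrained("local-checkpoint")

# ... so the local copy can be reloaded (and re-pushed) without the original repo's code
reloaded = AutoModel.from_pretrained("local-checkpoint", trust_remote_code=True)
```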
transformers
21,007
closed
Generate: FLAX uses `GenerationConfig` as the basis for `.generate()` parametrization
# What does this PR do? Changes the FLAX side of `.generate()` such that it relies on the `GenerationConfig`. This is the FLAX equivalent of https://github.com/huggingface/transformers/pull/20388
01-04-2023 19:49:34
01-04-2023 19:49:34
_The documentation is not available anymore as the PR was closed or merged._
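A minimal sketch of the parametrization this enables, assuming the Flax `generate()` accepts a `generation_config` argument after this PR in the same way the PyTorch version does (model and prompt are just examples):

```python
from transformers import AutoTokenizer, FlaxAutoModelForCausalLM, GenerationConfig

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")

# All generation knobs live in a GenerationConfig instead of loose kwargs
generation_config = GenerationConfig(max_length=20, pad_token_id=model.config.eos_token_id)

inputs = tokenizer("Hello, my dog is", return_tensors="np")
outputs = model.generate(inputs.input_ids, generation_config=generation_config)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
```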
transformers
21,006
closed
Update PR template
Adds @MKhalusova to the PR template for documentation-related issues :)
01-04-2023 18:33:08
01-04-2023 18:33:08
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,005
closed
Fix callback docstrings
Fixes #20965 where the parameters aren't properly formatted because it uses `Environment` instead of `Args` in the docstring.
01-04-2023 18:27:39
01-04-2023 18:27:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>Reformatted as Markdown πŸ‘
transformers
21,004
closed
Update bug report template
Adds @MKhalusova to the bug report template for documentation-related issues.
01-04-2023 18:09:36
01-04-2023 18:09:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,003
closed
Generate: Fix CI related to #20727
# What does this PR do? Fixes the error that showed up here: https://github.com/huggingface/transformers/actions/runs/3834635537/jobs/6527258530 Related to #20727
01-04-2023 18:04:34
01-04-2023 18:04:34
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,002
closed
Fix (DeepSpeed) docker image build issue
# What does this PR do? Currently, the docker image build job [Latest PyTorch + DeepSpeed](https://github.com/huggingface/transformers/actions/runs/3834393836/jobs/6526754769) fails from time to time. The issue occurs after #20788, where `apex` is recompiled during the build. It seems to be a resource issue (most likely a memory issue) due to the parallel build (multiple workers). So set `MAX_JOB=1` to avoid the failure. This will increase the build time to `1h30m`, but we have to build 2 copies of the same image (for daily CI and push CI), therefore 3h in total, which is way too long. Previously those 2 images were built sequentially due to some issue, but now it seems the issue is gone and we can build them in parallel.
01-04-2023 17:32:21
01-04-2023 17:32:21
_The documentation is not available anymore as the PR was closed or merged._
transformers
21,001
closed
Ability to Fine-tune a Zero-Shot Text Classifier on a corpus
### Feature request I want to do zero-shot text classification for automotive parts, and there are around 3200 candidate labels. This needs to be done on the basis of the part description. Pretrained zero-shot models like BART-MNLI etc. are not giving me good results, as they do not have much knowledge of this context. It would be great to fine-tune a zero-shot model on the whole corpus of descriptions, as I think this will improve results a lot. I saw this approach in ULMFiT by fastai, where they train the language-model encoder to predict the next word over the whole corpus, and then use that encoder as the backbone for a text classifier; once that is fine-tuned, results are better. It can be seen here: https://docs.fast.ai/tutorial.text.html Thanks, ### Motivation This way, we can provide the rich context that is required for making classifications/label tagging. ### Your contribution I am not skilled enough to contribute to this...
01-04-2023 17:01:03
01-04-2023 17:01:03
@m-ali-awan , I believe they are already implemented in transformers, 1. `from transformers import AutoModelForMaskedLM` for Masked Language Modelling(like your example) and 2. `from transformers import AutoModelForCausalLM` for GPT like models. <|||||>> @m-ali-awan , I believe they are already implemented in transformers, > > 1. `from transformers import AutoModelForMaskedLM` for Masked Language Modelling(like your example) and > 2. `from transformers import AutoModelForCausalLM` for GPT like models. @susnato Thanks for helping me... But, how can I format my corpus in the format of data required? <|||||>@m-ali-awan For MaskedLM you can just use the pre built class `DataCollatorForLanguageModeling` with mlm=True and mlm_probability=prob(how frequent do you want your tokens to be masked) as data_collator in Trainer class, then you can load your data using huggingface datasets and transformers will take care of all preprocessing in backend. I found a good and brief Kaggle Notebook, about this you can find it [here](https://www.kaggle.com/code/quincyqiang/maskedlm-pretrain-for-deberat-v3-large/notebook).<|||||>@susnato Thanks a lot. So, now if I want to use BART-MNLI for ZeroShot or Few Shot, I can finetune it on my corpus, in the same way, as described in the mentioned Kaggle notebook. I will do this as MaskedLM, right(or should I treat it as CausalLM)? Then, using that fine-tuned model, I can do ZeroShot, and FewShot(using around 5-10 labeled examples). Thanks again.<|||||>@m-ali-awan, Yes, but you may need to change the model based on your specific task/dataset, you can search online to see if you find a specific model which was trained for that specific type of task and for that specific type of dataset you are trying to use. You may want to go with variations of BERT for MLM at starting.<|||||>Ok thanks @susnato So, now I will got for BART-Large-MNLI, and fine tune it as MLM with my custom corpus at first. Then, will try it out for Zero-Shot or Few Shot, and same I can follow for other models... Thanks <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
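To make the suggestions above concrete, here is a compact sketch of masked-LM fine-tuning on a custom corpus with the prebuilt collator; the model name, the `corpus.txt` file and the hyperparameters are placeholders, not recommendations:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"  # placeholder backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# "corpus.txt" is a placeholder: one raw description per line
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# The collator randomly masks 15% of the tokens on the fly
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mlm-finetuned", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=data_collator,
)
trainer.train()
```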
transformers
21,000
closed
Remove more unused attributes in config classes
# What does this PR do?
01-04-2023 16:23:24
01-04-2023 16:23:24
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,999
closed
Refactor the function get_results
# What does this PR do? A small refactor for the function `get_results`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-04-2023 16:11:30
01-04-2023 16:11:30
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,998
closed
Fix model hub link
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Minor link fix on main README, fixes model hub link which directs to main Hugging Face page instead of models. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @stevhliu Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-04-2023 14:58:20
01-04-2023 14:58:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,997
closed
Query related to finetuning bert models for QA
When we run run_qa.py on bert-base using SQuAD data and then fine-tune it again on custom data, will it retrain all the layers, or will it train only the last layer (head) while freezing the other layers? Please explain the kind of fine-tuning done with run_qa.py.
01-04-2023 14:47:59
01-04-2023 14:47:59
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,996
closed
Command needed to finetune bert on squad using TPU
Please provide the command to execute run_qa.py on TPU to finetune bert models.
01-04-2023 14:43:51
01-04-2023 14:43:51
Please use the [forums](https://discuss.huggingface.co/) for such questions as we keep issues for bugs and feature requests only.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
20,995
closed
[CLIPSeg] Fix integration test
# What does this PR do? A user reported at https://github.com/timojl/clipseg/issues/18 that CLIPSeg uses the ImageNet mean + std instead of the [-1, 1] range for normalization. This PR updates the integration test as repos on the hub were fixed.
01-04-2023 11:59:47
01-04-2023 11:59:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>Yes, it uses `ViTImageProcessor` with different settings.
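For context, the two normalization schemes at stake differ only in the image processor's constants; a quick sketch of how to inspect what a given checkpoint ships with (the checkpoint name is the standard CLIPSeg one, and the values in the comment are general facts rather than a claim about this repo):

```python
from transformers import ViTImageProcessor

image_processor = ViTImageProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
print(image_processor.image_mean, image_processor.image_std)
# image_mean/image_std of [0.5, 0.5, 0.5] normalize pixel values to the [-1, 1] range,
# whereas the ImageNet statistics are roughly [0.485, 0.456, 0.406] / [0.229, 0.224, 0.225]
```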
transformers
20,994
closed
Generate: TF uses `GenerationConfig` as the basis for `.generate()` parametrization
# What does this PR do? Changes the TF side of `.generate()` such that it relies on the `GenerationConfig`. This is the TF equivalent of https://github.com/huggingface/transformers/pull/20388
01-04-2023 11:49:01
01-04-2023 11:49:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,993
closed
Remove cuda dependency from Deformable-DETR
# What does this PR do? Removes the CUDA dependency from Deformable DETR. The [OneFormer](https://github.com/huggingface/transformers/pull/20577) and [Mask2Former](https://github.com/huggingface/transformers/pull/20792) PRs also use the same multi-scale deformable attention function and eliminate the CUDA dependency. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
01-04-2023 11:35:51
01-04-2023 11:35:51
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,992
closed
[CI-doc-daily] Remove RobertaPreLayernorm random tests
# What does this PR do? Let's wait until #20757 is merged to merge this. The checkpoints for `RobertaPreLayerNormForMaskedLM`, `RobertaPreLayerNormForQuestionAnswering` and `RobertaPreLayerNormForTokenClassification` were not provided, thus the expected values are random and should not be tested.
01-04-2023 08:59:26
01-04-2023 08:59:26
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,991
closed
Support Transformer Engine and FP8 training
### Feature request NVIDIA has proposed the FP8 tensor core along with a [Transformer Engine](https://github.com/NVIDIA/TransformerEngine) library that implements the corresponding kernels and mixed-precision strategies (i.e. the delayed scaling strategy). I wonder if there is a plan to support Transformer Engine here? This could make better use of the newest hardware. ### Motivation Make better use of the newest hardware, especially H100.
01-04-2023 04:05:26
01-04-2023 04:05:26
There is work ongoing to add support for it in [Accelerate](https://github.com/huggingface/accelerate/tree/fp8_integration) first. Once this is tested and merged, we will also port it to the `Trainer`. For now we are hit by a regression problem we are trying to fix with the team at Nvidia.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>FP8 is now supported by Ada, and it is included in Accelerate. Will work continue on including FP8 in the Trainer? @sgugger<|||||>The Trainer will soon use Accelerate, so this will come for free.<|||||>Please update here once the Trainer supports it, thanks!<|||||>Looking forward to updates!<|||||>When will this be fully supported via transformers.TrainingArguments?
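A heavily hedged sketch of what the Accelerate-side usage could look like: the `mixed_precision="fp8"` value is an assumption based on how recent Accelerate versions expose FP8, and it additionally requires FP8-capable hardware (H100/Ada) plus Transformer Engine installed:

```python
import torch
from accelerate import Accelerator

# Assumes an FP8-capable GPU (H100 / Ada) and NVIDIA Transformer Engine installed.
accelerator = Accelerator(mixed_precision="fp8")

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

inputs = torch.randn(8, 1024, device=accelerator.device)
loss = model(inputs).float().pow(2).mean()  # dummy objective for illustration
accelerator.backward(loss)
optimizer.step()
```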
transformers
20,990
closed
System out of memory because of linear usage
### System Info Hi, hope this is the correct way to address my issue. When running a model to identify objects in images, memory usage keeps rising until my system can't handle any more, which usually happens after +- 15 images. I think I narrowed it down to the ```python outputs = model(**inputs) ``` variable declaration, as removing this for testing purposes gets rid of the memory increases. I'll paste relevant parts of my code in the _Reproduction_ box. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```python texts = [['an illustration of ...']] # texts has about 500 prompts it looks for processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") for partition in range(1, max_partition): # my code detects 25 images at a time, then writes the results to file detect(25, partition) # it then gets the urls from a json file and gets the images as a response, after which it does the following: def get(response): inputs = processor(text=texts, images=response, return_tensors="pt") outputs = model(**inputs) target_sizes = torch.Tensor([response.size[::-1]]) results = processor.post_process(outputs=outputs, target_sizes=target_sizes) i = 0 text = texts[i] scores, labels = results[i]["scores"], results[i]["labels"] return zip(scores, labels) ``` I hope this explains it well. If need be I can also supply the actual Python file. ### Expected behavior Not crashing my system after 2 minutes of runtime.
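A minimal sketch of running the detection step under `torch.no_grad()`, which is the fix suggested in the replies below, so that no activations are retained for a backward pass (the prompt list is shortened to a placeholder):

```python
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")
texts = [["an illustration of a tortoise", "an illustration of a magpie"]]  # shortened placeholder prompts

def get(response):
    inputs = processor(text=texts, images=response, return_tensors="pt")
    with torch.no_grad():  # inference only: no activations are kept for a backward pass
        outputs = model(**inputs)
    target_sizes = torch.Tensor([response.size[::-1]])
    results = processor.post_process(outputs=outputs, target_sizes=target_sizes)
    scores, labels = results[0]["scores"], results[0]["labels"]
    return zip(scores, labels)
```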
01-04-2023 01:13:58
01-04-2023 01:13:58
I don't know how you want us to help without a way to reproduce the error. It looks like you're not running inference within a `torch.no_grad()` context manager, which will consume memories of activations saved for the backward pass, which might be the issue. Also cc @alaradirik <|||||>Thank you for your quick reply. This is the full script: ```Python from transformers import OwlViTProcessor, OwlViTForObjectDetection import json import math import requests import torch from PIL import Image dictionary = [] def sub_question_two(part_size, part): print() print("-------------------------------------------------------") print("Processing partition " + str(part)) print("-------------------------------------------------------") global dictionary dictionary = [] with open("data.json", "r") as file_json: json_data = json.load(file_json) json_data = json_data[(part - 1) * part_size:part * part_size] for row in json_data: handler(row) write_to_file(part) def write_to_file(part): if part == 2717: open("data_subquestion_two.json", "w") with open("data_subquestion_two.json", "r+") as file_json: file_json.seek(0) json.dump(dictionary, file_json, indent=4) else: with open("data_subquestion_two.json", "r+") as file_json: file_data = json.loads(file_json.read()) file_data = file_data + dictionary file_json.seek(0) json.dump(file_data, file_json, indent=4) def handler(json_row): print(json_row["isbn"]) try: with Image.open(requests.get(json_row["library"]["cover"], stream=True).raw) as response: json_row["library"]["sq2"] = get_caption(response) except: pass try: with Image.open(requests.get(json_row["amazon"]["cover"], stream=True).raw) as response: json_row["amazon"]["sq2"] = get_caption(response) except: pass dictionary.append(json_row) def get_caption(response): inputs = processor(text=texts, images=response, return_tensors="pt") outputs = model(**inputs) target_sizes = torch.Tensor([response.size[::-1]]) results = processor.post_process(outputs=outputs, target_sizes=target_sizes) i = 0 text = texts[i] scores, labels = results[i]["scores"], results[i]["labels"] zipped = zip(scores, labels) better_zip = {} found = [] for el in zipped: if el[1] not in found: found.append(el[1]) better_zip[text[el[1]]] = round(el[0].item(), 3) else: if better_zip[text[el[1]]] < round(el[0].item(), 3): better_zip[text[el[1]]] = round(el[0].item(), 3) return dict(sorted(better_zip.items(), key=lambda item: item[1], reverse=True)) partition_size = 25 max_partition_sq2 = math.ceil(93662 / partition_size) + 1 texts = [['an illustration of a tortoise', 'an illustration of a magpie', 'an illustration of a sea turtle', 'an illustration of a general football', 'an illustration of a ambulance', 'an illustration of a ladder', 'an illustration of a toothbrush', 'an illustration of a syringe', 'an illustration of a sink', 'an illustration of a toy', 'an illustration of a organ', 'an illustration of a apple', 'an illustration of a eye', 'an illustration of a cosmetics', 'an illustration of a paddle', 'an illustration of a snowman', 'an illustration of a beer', 'an illustration of a chopsticks', 'an illustration of a beard', 'an illustration of a bird', 'an illustration of a traffic light', 'an illustration of a croissant', 'an illustration of a cucumber', 'an illustration of a radish', 'an illustration of a towel', 'an illustration of a doll', 'an illustration of a skull', 'an illustration of a washing machine', 'an illustration of a glove', 'an illustration of a belt', 'an illustration of a sunglasses', 'an illustration of a banjo', 
'an illustration of a cart', 'an illustration of a ball', 'an illustration of a backpack', 'an illustration of a bike', 'an illustration of a home appliance', 'an illustration of a centipede', 'an illustration of a boat', 'an illustration of a surfboard', 'an illustration of a boot', 'an illustration of a headphones', 'an illustration of a hot dog', 'an illustration of a shorts', 'an illustration of a fast food', 'an illustration of a bus', 'an illustration of a boy', 'an illustration of a bicycle wheel', 'an illustration of a barge', 'an illustration of a laptop', 'an illustration of a miniskirt', 'an illustration of a drill', 'an illustration of a dress', 'an illustration of a bear', 'an illustration of a waffle', 'an illustration of a pancake', 'an illustration of a brown bear', 'an illustration of a woodpecker', 'an illustration of a blue jay', 'an illustration of a pretzel', 'an illustration of a bagel', 'an illustration of a tower', 'an illustration of a teapot', 'an illustration of a person', 'an illustration of a bow and arrow', 'an illustration of a swimwear', 'an illustration of a beehive', 'an illustration of a brassiere', 'an illustration of a bee', 'an illustration of a bat', 'an illustration of a starfish', 'an illustration of a popcorn', 'an illustration of a burrito', 'an illustration of a chainsaw', 'an illustration of a balloon', 'an illustration of a tent', 'an illustration of a licence plate', 'an illustration of a lantern', 'an illustration of a flashlight', 'an illustration of a billboard', 'an illustration of a tiara', 'an illustration of a limousine', 'an illustration of a necklace', 'an illustration of a carnivore', 'an illustration of a scissors', 'an illustration of a stairs', 'an illustration of a computer keyboard', 'an illustration of a printer', 'an illustration of a traffic sign', 'an illustration of a chair', 'an illustration of a shirt', 'an illustration of a poster', 'an illustration of a cheese', 'an illustration of a sock', 'an illustration of a fire hydrant', 'an illustration of a land vehicle', 'an illustration of a earrings', 'an illustration of a tie', 'an illustration of a watercraft', 'an illustration of a cabinetry', 'an illustration of a suitcase', 'an illustration of a muffin', 'an illustration of a bidet', 'an illustration of a snack', 'an illustration of a snowmobile', 'an illustration of a clock', 'an illustration of a medical equipment', 'an illustration of a cattle', 'an illustration of a cello', 'an illustration of a jet ski', 'an illustration of a camel', 'an illustration of a coat', 'an illustration of a suit', 'an illustration of a desk', 'an illustration of a cat', 'an illustration of a bronze sculpture', 'an illustration of a juice', 'an illustration of a gondola', 'an illustration of a beetle', 'an illustration of a cannon', 'an illustration of a mouse', 'an illustration of a cookie', 'an illustration of a office', 'an illustration of a fountain', 'an illustration of a coin', 'an illustration of a calculator', 'an illustration of a cocktail', 'an illustration of a computer monitor', 'an illustration of a box', 'an illustration of a christmas tree', 'an illustration of a cowboy hat', 'an illustration of a hiking equipment', 'an illustration of a studio couch', 'an illustration of a drum', 'an illustration of a dessert', 'an illustration of a wine rack', 'an illustration of a drink', 'an illustration of a zucchini', 'an illustration of a ladle', 'an illustration of a mouth', 'an illustration of a dairy', 'an illustration of a dice', 
'an illustration of a oven', 'an illustration of a dinosaur', 'an illustration of a couch', 'an illustration of a cricket ball', 'an illustration of a winter melon', 'an illustration of a whiteboard', 'an illustration of a door', 'an illustration of a hat', 'an illustration of a shower', 'an illustration of a fedora', 'an illustration of a guacamole', 'an illustration of a dagger', 'an illustration of a scarf', 'an illustration of a dolphin', 'an illustration of a sombrero', 'an illustration of a tin can', 'an illustration of a mug', 'an illustration of a tap', 'an illustration of a harbor seal', 'an illustration of a stretcher', 'an illustration of a goggles', 'an illustration of a human body', 'an illustration of a roller skates', 'an illustration of a coffee cup', 'an illustration of a cutting board', 'an illustration of a blender', 'an illustration of a plumbing fixture', 'an illustration of a stop sign', 'an illustration of a office supplies', 'an illustration of a volleyball', 'an illustration of a vase', 'an illustration of a slow cooker', 'an illustration of a wardrobe', 'an illustration of a coffee', 'an illustration of a paper towel', 'an illustration of a personal care', 'an illustration of a food', 'an illustration of a sun hat', 'an illustration of a tree house', 'an illustration of a skirt', 'an illustration of a gas stove', 'an illustration of a salt and pepper shakers', 'an illustration of a mechanical fan', 'an illustration of a fruit', 'an illustration of a french fries', 'an illustration of a nightstand', 'an illustration of a barrel', 'an illustration of a kite', 'an illustration of a tart', 'an illustration of a treadmill', 'an illustration of a fox', 'an illustration of a flag', 'an illustration of a horn', 'an illustration of a window blind', 'an illustration of a foot', 'an illustration of a golf cart', 'an illustration of a jacket', 'an illustration of a egg', 'an illustration of a street light', 'an illustration of a guitar', 'an illustration of a pillow', 'an illustration of a leg', 'an illustration of a isopod', 'an illustration of a grape', 'an illustration of a ear', 'an illustration of a power plugs and sockets', 'an illustration of a panda', 'an illustration of a giraffe', 'an illustration of a woman', 'an illustration of a door handle', 'an illustration of a rhinoceros', 'an illustration of a bathtub', 'an illustration of a goldfish', 'an illustration of a houseplant', 'an illustration of a goat', 'an illustration of a baseball bat', 'an illustration of a baseball glove', 'an illustration of a mixing bowl', 'an illustration of a marine invertebrates', 'an illustration of a kitchen utensil', 'an illustration of a light switch', 'an illustration of a house', 'an illustration of a horse', 'an illustration of a stationary bicycle', 'an illustration of a ceiling fan', 'an illustration of a sofa bed', 'an illustration of a harp', 'an illustration of a sandal', 'an illustration of a bicycle helmet', 'an illustration of a saucer', 'an illustration of a harpsichord', 'an illustration of a hair', 'an illustration of a hamster', 'an illustration of a curtain', 'an illustration of a bed', 'an illustration of a kettle', 'an illustration of a fireplace', 'an illustration of a scale', 'an illustration of a drinking straw', 'an illustration of a insect', 'an illustration of a invertebrate', 'an illustration of a food processor', 'an illustration of a bookcase', 'an illustration of a refrigerator', 'an illustration of a wood-burning stove', 'an illustration of a punching 
bag', 'an illustration of a common fig', 'an illustration of a jaguar', 'an illustration of a golf ball', 'an illustration of a fashion accessory', 'an illustration of a alarm clock', 'an illustration of a filing cabinet', 'an illustration of a artichoke', 'an illustration of a table', 'an illustration of a tableware', 'an illustration of a kangaroo', 'an illustration of a koala', 'an illustration of a knife', 'an illustration of a bottle', 'an illustration of a lynx', 'an illustration of a lavender', 'an illustration of a lighthouse', 'an illustration of a dumbbell', 'an illustration of a head', 'an illustration of a bowl', 'an illustration of a porch', 'an illustration of a lizard', 'an illustration of a billiard table', 'an illustration of a mammal', 'an illustration of a mouse', 'an illustration of a motorcycle', 'an illustration of a musical instrument', 'an illustration of a swim cap', 'an illustration of a frying pan', 'an illustration of a snowplow', 'an illustration of a bathroom cabinet', 'an illustration of a missile', 'an illustration of a bust', 'an illustration of a man', 'an illustration of a milk', 'an illustration of a plate', 'an illustration of a mobile phone', 'an illustration of a baked goods', 'an illustration of a mushroom', 'an illustration of a pitcher', 'an illustration of a mirror', 'an illustration of a lifejacket', 'an illustration of a table tennis racket', 'an illustration of a musical keyboard', 'an illustration of a scoreboard', 'an illustration of a briefcase', 'an illustration of a kitchen knife', 'an illustration of a tennis ball', 'an illustration of a plastic bag', 'an illustration of a oboe', 'an illustration of a chest of drawers', 'an illustration of a ostrich', 'an illustration of a piano', 'an illustration of a girl', 'an illustration of a plant', 'an illustration of a potato', 'an illustration of a sports equipment', 'an illustration of a pasta', 'an illustration of a penguin', 'an illustration of a pumpkin', 'an illustration of a pear', 'an illustration of a infant bed', 'an illustration of a polar bear', 'an illustration of a mixer', 'an illustration of a cupboard', 'an illustration of a jacuzzi', 'an illustration of a pizza', 'an illustration of a digital clock', 'an illustration of a pig', 'an illustration of a reptile', 'an illustration of a rifle', 'an illustration of a lipstick', 'an illustration of a skateboard', 'an illustration of a raven', 'an illustration of a high heels', 'an illustration of a red panda', 'an illustration of a rose', 'an illustration of a rabbit', 'an illustration of a sculpture', 'an illustration of a saxophone', 'an illustration of a shotgun', 'an illustration of a seafood', 'an illustration of a submarine sandwich', 'an illustration of a snowboard', 'an illustration of a sword', 'an illustration of a picture frame', 'an illustration of a sushi', 'an illustration of a loveseat', 'an illustration of a ski', 'an illustration of a squirrel', 'an illustration of a tripod', 'an illustration of a stethoscope', 'an illustration of a submarine', 'an illustration of a scorpion', 'an illustration of a segway', 'an illustration of a bench', 'an illustration of a snake', 'an illustration of a coffee table', 'an illustration of a skyscraper', 'an illustration of a sheep', 'an illustration of a television', 'an illustration of a trombone', 'an illustration of a tea', 'an illustration of a tank', 'an illustration of a taco', 'an illustration of a telephone', 'an illustration of a tiger', 'an illustration of a strawberry', 'an 
illustration of a trumpet', 'an illustration of a tree', 'an illustration of a tomato', 'an illustration of a train', 'an illustration of a tool', 'an illustration of a picnic basket', 'an illustration of a trousers', 'an illustration of a bowling equipment', 'an illustration of a football helmet', 'an illustration of a truck', 'an illustration of a coffeemaker', 'an illustration of a violin', 'an illustration of a vehicle', 'an illustration of a handbag', 'an illustration of a wine', 'an illustration of a weapon', 'an illustration of a wheel', 'an illustration of a worm', 'an illustration of a wok', 'an illustration of a whale', 'an illustration of a zebra', 'an illustration of a auto part', 'an illustration of a jug', 'an illustration of a cream', 'an illustration of a monkey', 'an illustration of a lion', 'an illustration of a bread', 'an illustration of a platter', 'an illustration of a chicken', 'an illustration of a eagle', 'an illustration of a helicopter', 'an illustration of a owl', 'an illustration of a duck', 'an illustration of a turtle', 'an illustration of a hippopotamus', 'an illustration of a crocodile', 'an illustration of a toilet', 'an illustration of a toilet paper', 'an illustration of a squid', 'an illustration of a clothing', 'an illustration of a footwear', 'an illustration of a lemon', 'an illustration of a spider', 'an illustration of a deer', 'an illustration of a frog', 'an illustration of a banana', 'an illustration of a rocket', 'an illustration of a wine glass', 'an illustration of a countertop', 'an illustration of a tablet computer', 'an illustration of a waste container', 'an illustration of a swimming pool', 'an illustration of a dog', 'an illustration of a book', 'an illustration of a elephant', 'an illustration of a shark', 'an illustration of a candle', 'an illustration of a leopard', 'an illustration of a porcupine', 'an illustration of a flower', 'an illustration of a canary', 'an illustration of a cheetah', 'an illustration of a palm tree', 'an illustration of a hamburger', 'an illustration of a maple', 'an illustration of a building', 'an illustration of a fish', 'an illustration of a lobster', 'an illustration of a asparagus', 'an illustration of a furniture', 'an illustration of a hedgehog', 'an illustration of a airplane', 'an illustration of a spoon', 'an illustration of a otter', 'an illustration of a bull', 'an illustration of a oyster', 'an illustration of a convenience store', 'an illustration of a bench', 'an illustration of a ice cream', 'an illustration of a caterpillar', 'an illustration of a butterfly', 'an illustration of a parachute', 'an illustration of a orange', 'an illustration of a antelope', 'an illustration of a moths and butterflies', 'an illustration of a window', 'an illustration of a closet', 'an illustration of a castle', 'an illustration of a jellyfish', 'an illustration of a goose', 'an illustration of a mule', 'an illustration of a swan', 'an illustration of a peach', 'an illustration of a seat belt', 'an illustration of a raccoon', 'an illustration of a fork', 'an illustration of a lamp', 'an illustration of a camera', 'an illustration of a squash', 'an illustration of a racket', 'an illustration of a face', 'an illustration of a arm', 'an illustration of a vegetable', 'an illustration of a unicycle', 'an illustration of a falcon', 'an illustration of a snail', 'an illustration of a shellfish', 'an illustration of a cabbage', 'an illustration of a carrot', 'an illustration of a mango', 'an illustration of a jeans', 
'an illustration of a flowerpot', 'an illustration of a pineapple', 'an illustration of a drawer', 'an illustration of a stool', 'an illustration of a envelope', 'an illustration of a cake', 'an illustration of a dragonfly', 'an illustration of a sunflower', 'an illustration of a microwave oven', 'an illustration of a honeycomb', 'an illustration of a marine mammal', 'an illustration of a sea lion', 'an illustration of a ladybug', 'an illustration of a shelf', 'an illustration of a watch', 'an illustration of a candy', 'an illustration of a salad', 'an illustration of a parrot', 'an illustration of a handgun', 'an illustration of a sparrow', 'an illustration of a van', 'an illustration of a spice rack', 'an illustration of a light bulb', 'an illustration of a corded phone', 'an illustration of a sports uniform', 'an illustration of a tennis racket', 'an illustration of a wall clock', 'an illustration of a serving tray', 'an illustration of a kitchen & dining room table', 'an illustration of a dog bed', 'an illustration of a cake stand', 'an illustration of a bathroom accessory', 'an illustration of a kitchen appliance', 'an illustration of a tire', 'an illustration of a ruler', 'an illustration of a luggage and bags', 'an illustration of a microphone', 'an illustration of a broccoli', 'an illustration of a umbrella', 'an illustration of a pastry', 'an illustration of a grapefruit', 'an illustration of a animal', 'an illustration of a bell pepper', 'an illustration of a turkey', 'an illustration of a lily', 'an illustration of a pomegranate', 'an illustration of a doughnut', 'an illustration of a glasses', 'an illustration of a nose', 'an illustration of a pen', 'an illustration of a ant', 'an illustration of a car', 'an illustration of a aircraft', 'an illustration of a hand', 'an illustration of a teddy bear', 'an illustration of a watermelon', 'an illustration of a cantaloupe', 'an illustration of a dishwasher', 'an illustration of a flute', 'an illustration of a balance beam', 'an illustration of a sandwich', 'an illustration of a shrimp', 'an illustration of a sewing machine', 'an illustration of a binoculars', 'an illustration of a rays and skates', 'an illustration of a ipod', 'an illustration of a accordion', 'an illustration of a willow', 'an illustration of a crab', 'an illustration of a crown', 'an illustration of a seahorse', 'an illustration of a perfume', 'an illustration of a alpaca', 'an illustration of a taxi', 'an illustration of a canoe', 'an illustration of a remote control', 'an illustration of a wheelchair', 'an illustration of a rugby ball', 'an illustration of a helmet']] processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32") model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32") for partition in range(2717, max_partition_sq2): sub_question_two(partition_size, partition) ``` data.json contains the urls to the images. I will look into torch.no_grad().<|||||>Quick update. It looks like the ```torch.no_grad()``` function did the trick! Adding it after the ```get_caption(response)``` call made it so the memory usage remains stable at around 12G memory usage. I can't thank you enough, I've been up trying to fix this since I created this issue and will now sleep.
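A minimal, hedged sketch of the `torch.no_grad()` fix discussed above: the forward pass is wrapped so no activations are kept for a backward pass during inference. The helper name mirrors the `get_caption` function in the script above; the checkpoint and inputs are illustrative.

```python
import torch
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

def get_caption(image, texts):
    # Preprocess the PIL image together with the text queries
    inputs = processor(text=texts, images=image, return_tensors="pt")
    # Inference only: skip building the autograd graph so activations are freed
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.Tensor([image.size[::-1]])
    return processor.post_process(outputs=outputs, target_sizes=target_sizes)
```

Without the `torch.no_grad()` block, each forward pass stores activations for a backward pass that never happens, which matches the steadily growing memory usage reported above.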
transformers
20,989
closed
Fix race condition on cleaning checkpoints when save_total_limit set to 1
# What does this PR do? This PR fixes #20988 by testing whether the worker process is allowed to save (`self.args.should_save` is set to True). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #20988 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. - trainer: @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-03-2023 17:57:19
01-03-2023 17:57:19
_The documentation is not available anymore as the PR was closed or merged._<|||||>It's you I should be thanking, @sgugger, for your quick review. I've just fixed the style and pushed.
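As a rough, standalone illustration of the race this PR addresses (function and argument names here are assumptions, not the actual `Trainer` code), the checkpoint rotation only runs on the process that is allowed to save:

```python
import shutil
from pathlib import Path

def rotate_checkpoints(output_dir: str, save_total_limit: int, should_save: bool) -> None:
    """Keep at most `save_total_limit` checkpoints; only the saving process deletes.

    `should_save` plays the role of `TrainingArguments.should_save`: on a shared
    filesystem, letting every worker delete the same directories at once is what
    produces the FileNotFoundError described in #20988.
    """
    if not should_save:
        return
    checkpoints = sorted(
        Path(output_dir).glob("checkpoint-*"),
        key=lambda path: int(path.name.split("-")[-1]),
    )
    for old_checkpoint in checkpoints[:-save_total_limit]:
        shutil.rmtree(old_checkpoint)
```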
transformers
20,988
closed
[Multi-node setup] Race condition on deleting checkpoint when using shared filesystem and save_total_limit=1
### System Info When running training on multi-node setup with a shared filesystem (shared PVC on Kubernetes). W use the following configuration (Full example on Reproduction section) : ```python load_best_model_at_end=True, save_on_each_node=False, save_total_limit=1, ``` When the training is finished over all epochs, it fails with FileNotFoundError with random file. It seems all the workers are trying to delete the same files when we set `save_total_limit=1`. This is causing whole training script to fail: ```bash FileNotFoundError: [Errno 2] No such file or directory: 'rng_state_1.pth' ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 7796) ... torch.distributed.elastic.multiprocessing.errors.ChildFailedError: `` ### Who can help? @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I created the following python script `trainer_bug.py`, it runs **GLUE** `cola` training task on a small sample of data: ```python # pip install transformers==4.25.1 datasets==2.8.0 torch==1.13.1 scipy scikit-learn import numpy as np from datasets import load_dataset, load_metric from transformers import AutoTokenizer from transformers import AutoModelForSequenceClassification, TrainingArguments, Trainer task = "cola" model_checkpoint = "distilbert-base-uncased" num_labels = 2 batch_size = 2 metric_name = "matthews_correlation" validation_key = "validation" SAMPLE_N_ROWS = 10 if __name__ == "__main__": dataset = load_dataset("glue", task) for split in dataset: dataset[split] = dataset[split].select(range(SAMPLE_N_ROWS)) metric = load_metric('glue', task) tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) def preprocess_function(examples): return tokenizer(examples["sentence"], truncation=True) def compute_metrics(eval_pred): predictions, labels = eval_pred predictions = np.argmax(predictions, axis=1) return metric.compute(predictions=predictions, references=labels) encoded_dataset = dataset.map(preprocess_function, batched=True) model = AutoModelForSequenceClassification.from_pretrained(model_checkpoint, num_labels=num_labels) model_name = model_checkpoint.split("/")[-1] args = TrainingArguments( f"{model_name}-finetuned-{task}", evaluation_strategy="epoch", save_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, num_train_epochs=3, weight_decay=0.01, report_to="none", metric_for_best_model=metric_name, overwrite_output_dir=True, load_best_model_at_end=True, log_on_each_node=False, save_on_each_node=False, save_total_limit=1, # For a distributed CPU setup no_cuda=True, xpu_backend="gloo", ) trainer = Trainer( model, args, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset[validation_key], tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() ``` And then run it with this script `trainer_bug.sh` to simulate 2 nodes setup on CPUs: ```bash WORLD_SIZE=2 PROC_PER_NODE=1 MASTER_HOSTNAME=localhost MASTER_PORT=12345 # Run worker RANK=1 CUDA_VISIBLE_DEVICES="" torchrun --nnodes=$WORLD_SIZE --nproc_per_node=$PROC_PER_NODE \ --node_rank=$RANK --master_addr=$MASTER_HOSTNAME \ --master_port=$MASTER_PORT \ trainer_bug.py & # Run master RANK=0 CUDA_VISIBLE_DEVICES="" torchrun --nnodes=$WORLD_SIZE --nproc_per_node=$PROC_PER_NODE \ --node_rank=$RANK 
--master_addr=$MASTER_HOSTNAME \ --master_port=$MASTER_PORT \ trainer_bug.py ``` ### Expected behavior The training is expected to finish successfully. However it fails with the following stack trace: ```bash Loading best model from distilbert-base-uncased-finetuned-cola/checkpoint-3 (score: 0.0). {'train_runtime': 24.6088, 'train_samples_per_second': 1.219, 'train_steps_per_second': 0.366, 'train_loss': 0.5689484278361002, 'epoch': 3.0}{'train_runtime': 24.6164, 'train_samples_per_second': 1.219, 'train_steps_per_second': 0.366, 'train_loss': 0.5813997056749132, 'epoch': 3.0} 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 9/9 [00:24<00:00, 1.83s/it] Deleting older checkpoint [distilbert-base-uncased-finetuned-cola/checkpoint-9] due to args.save_total_limit 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 9/9 [00:24<00:00, 2.74s/it] Traceback (most recent call last): File "trainer_bug.py", line 66, in <module> trainer.train() File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1527, in train return inner_training_loop( File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1920, in _inner_training_loop shutil.rmtree(checkpoint) File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 718, in rmtree _rmtree_safe_fd(fd, path, onerror) File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 675, in _rmtree_safe_fd onerror(os.unlink, fullname, sys.exc_info()) File "/home/XXX/.pyenv/versions/3.8.13/lib/python3.8/shutil.py", line 673, in _rmtree_safe_fd os.unlink(entry.name, dir_fd=topfd) FileNotFoundError: [Errno 2] No such file or directory: 'rng_state_1.pth' ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 7796) of binary: /home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/bin/python Traceback (most recent call last): File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File 
"/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/run.py", line 762, in main run(args) File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run elastic_launch( File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/XXX/.cache/pypoetry/virtualenvs/XXX-training-zu6czGQ--py3.8/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ============================================================ trainer_bug.py FAILED ------------------------------------------------------------ Failures: <NO_OTHER_FAILURES> ------------------------------------------------------------ Root Cause (first observed failure): [0]: time : 2023-01-03_18:28:49 host : XXXXXX rank : 1 (local_rank: 0) exitcode : 1 (pid: 7796) error_file: <N/A> traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html ============================================================ ```
01-03-2023 17:56:27
01-03-2023 17:56:27
transformers
20,987
closed
Hugging Face Dies Silently when Memory insufficient for loading Model / Training Model
Currently, when you load a model into memory that is too large, or when you try to train a model with insufficient memory, the process gets killed without an error message. It's a bit tough to track down what is going on as a result. I'm wondering if you can add an error message, similar to PyTorch's, when you have insufficient memory to run a given process?
01-03-2023 15:54:17
01-03-2023 15:54:17
If you have insufficient GPU memory, you will get the PyTorch error. For RAM issues, I don't think there is anything that exists to issue the same errors.<|||||>I was running on CPU. I know I've gotten the PyTorch errors on GPU. If nothing exists, that's alright. I just thought it would be nice to get an error message so you could more easily see what was going on, particularly when you're just loading a model for inference, which is often done on CPU.
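Not something the library provides, but one hedged workaround for the silent OOM kill on CPU is to pre-check available RAM against the checkpoint size before loading; `psutil` and the safety factor below are assumptions for illustration only.

```python
import os
import psutil

def check_ram_before_load(checkpoint_path: str, safety_factor: float = 2.0) -> None:
    """Raise a clear error instead of letting the OS OOM-kill the process.

    The safety factor is a rough guess: loading a state dict usually needs
    noticeably more RAM than the file size alone.
    """
    needed = os.path.getsize(checkpoint_path) * safety_factor
    available = psutil.virtual_memory().available
    if needed > available:
        raise MemoryError(
            f"Loading {checkpoint_path} may need ~{needed / 1e9:.1f} GB of RAM "
            f"but only {available / 1e9:.1f} GB is available."
        )
```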
transformers
20,986
closed
Fix for LXMERT
# What does this PR do? While continuing to remove unused attributes in config classes, it turned out that `LxmertConfig.visual_feat_loss` is, by mistake, not actually used.
01-03-2023 15:52:33
01-03-2023 15:52:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
20,985
closed
Added mask_time_prob and mask_time_length arguments to wav2vec2 pretraining script
This PR relates to [PR 19997](https://github.com/huggingface/transformers/pull/19997), which I messed up by forgetting the --force flag when pushing. Hopefully this PR is set up correctly. @sanchit-gandhi @sgugger @patrickvonplaten
01-03-2023 15:19:58
01-03-2023 15:19:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>@sanchit-gandhi
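For readers unfamiliar with the script, a rough sketch of what exposing such options usually looks like (the argument names follow the PR title, but the defaults, help texts, and parser style here are assumptions, not the actual diff):

```python
import argparse

# Illustrative subset of the pretraining script's command-line arguments.
parser = argparse.ArgumentParser(description="Sketch of wav2vec2 pretraining masking options")
parser.add_argument(
    "--mask_time_prob",
    type=float,
    default=None,
    help="Probability of masking a feature time step during pretraining "
         "(falls back to the model config when not given).",
)
parser.add_argument(
    "--mask_time_length",
    type=int,
    default=None,
    help="Length, in time steps, of each masked span "
         "(falls back to the model config when not given).",
)
args = parser.parse_args([])  # empty list so the sketch runs without CLI input
print(args.mask_time_prob, args.mask_time_length)
```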
transformers
20,984
closed
Ignore errors when deleting old checkpoints in trainer
# What does this PR do? Fixes #17265 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
01-03-2023 14:54:40
01-03-2023 14:54:40
_The documentation is not available anymore as the PR was closed or merged._
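A minimal sketch of the approach named in the title, shown outside the `Trainer` class (the real change lives in `trainer.py` and may differ in detail):

```python
import shutil

def delete_old_checkpoint(checkpoint_dir: str) -> None:
    # ignore_errors=True tolerates files that another process (or a flaky
    # shared filesystem) already removed, instead of crashing the whole
    # training run with FileNotFoundError.
    shutil.rmtree(checkpoint_dir, ignore_errors=True)
```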
transformers
20,983
closed
Add DETA
# What does this PR do? This PR adds [DETA](https://github.com/jozhang97/DETA/issues/3). DETA is a slight modification of Deformable DETR: it uses traditional IoU-based assignment instead of the Hungarian matching used in the original DETR, and incorporates NMS (non-maximum suppression) in the postprocessing. Note: this model has a `torchvision` dependency for NMS. To do: - [x] transfer checkpoints
01-03-2023 13:59:53
01-03-2023 13:59:53
cc @alaradirik this PR is in a ready state, except for 2 things: - [x] whether or not we leverage torchvision's `batched_nms` => the CI is currently failing because this library is not installed. Will also ask for @sgugger and @LysandreJik's opinion here - [ ] the `post_process_object_detection` method might require an in-depth look<|||||>There is no problem with the model requiring torchvision to be installed. We have many models with specific dependencies, some of which you ported yourself ;-). Just protect the import behind an `if is_torchvision_available()` check and have the first line in the init of the models be a `requires_backends(["torchvision"])`.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger I've addressed all comments, except for adding support for the custom kernel. Could we perhaps add support for the custom kernel for the 3 models (Mask2Former, OneFormer and DETA) in a separate PR?<|||||>In this case, remove the code trying to load the custom kernels in the modeling file and we can add it back in the PR that will deal with custom kernels.<|||||>@sgugger ok, feel free to approve :)<|||||>Failing test is unrelated/flaky, merging.
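For reference, a sketch of the soft-dependency pattern suggested in the review above. The helpers are taken from `transformers.utils` as I understand them (`is_torchvision_available`, `requires_backends`) and the class name is made up, so treat the details as assumptions rather than the literal DETA code.

```python
from transformers.utils import is_torchvision_available, requires_backends

if is_torchvision_available():
    # Only imported when torchvision is actually installed
    from torchvision.ops import batched_nms

class DetaSketchModel:
    """Illustrative stand-in for the actual DETA model class."""

    def __init__(self):
        # Fails with a clear "please install torchvision" message when it is missing.
        requires_backends(self, ["torchvision"])
```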
transformers
20,982
closed
[WIP] Avoid Null CI
# What does this PR do? [WIP] Avoid Null CI
01-03-2023 13:52:32
01-03-2023 13:52:32
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20982). All of your documentation changes will be reflected on that endpoint.
transformers
20,981
closed
Avoid CI runs under users' own CircleCI personal account
# What does this PR do? Sometimes a PR's CI jobs run under the author's own CircleCI account instead of ours, and our tests are skipped since that account doesn't have access to our resources. One example is on [this PR](https://github.com/huggingface/transformers/pull/20479#issuecomment-1369690668) where the "real" tests were not run. **We can make the new job `check_circleci_user` required (too) - once this PR is merged into `main`**
01-03-2023 13:47:39
01-03-2023 13:47:39
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20981). All of your documentation changes will be reflected on that endpoint.
transformers
20,980
closed
Improve OWL-ViT postprocessing
# What does this PR do? - Adds post_process_object_detection method to OWL-ViT with the same functionality as other object detection post-processing methods (thresholding, different target sizes for each image in the batch). - Updates the zero-shot-object-detection pipeline to use the new method ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
01-03-2023 13:26:40
01-03-2023 13:26:40
_The documentation is not available anymore as the PR was closed or merged._
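A short usage sketch of the method this PR adds. The demo image URL, text queries, and threshold value are illustrative, and I'm assuming the method is reachable from the processor the same way the existing `post_process` is; the final argument names should match the other object-detection post-processing methods.

```python
import requests
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # the usual cats demo image
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a remote control"]]

inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One (height, width) pair per image; low-score boxes are dropped by `threshold`.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs=outputs, threshold=0.1, target_sizes=target_sizes
)
print(results[0]["boxes"], results[0]["scores"], results[0]["labels"])
```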
transformers
20,979
closed
Improve OWL-ViT postprocessing
# What does this PR do? - Adds post_process_object_detection method to OWL-ViT with the same functionality as other object detection post-processing methods (thresholding, different target sizes for each image in the batch). - Updates the zero-shot-object-detection pipeline to use the new method ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
01-03-2023 13:20:23
01-03-2023 13:20:23
transformers
20,978
closed
Adding Support for Mixed Precision in Accelerator
There's a bug in the code that, we've got `accelerator.use_fp16` but the accelerator.use_fp16 flag can never be `True` because we didn't pass it in. I've added the support by passing in the fp16 flag. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-03-2023 13:14:15
01-03-2023 13:14:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_20978). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
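For context, a minimal sketch of how mixed precision is enabled on the Accelerate side; whether the example script should pass `fp16=...` or `mixed_precision=...` depends on the Accelerate version, so the keyword below is an assumption based on current releases.

```python
import torch
from accelerate import Accelerator

# Request fp16 only when a CUDA device is present; fall back to full precision on CPU.
precision = "fp16" if torch.cuda.is_available() else "no"
accelerator = Accelerator(mixed_precision=precision)
print(accelerator.mixed_precision)
```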
transformers
20,977
closed
Fix post_process_object_detection method descriptions
# What does this PR do? Fixes the descriptions of all affected model methods (post_process_object_detection and the deprecated post_process methods) that inaccurately state the methods return bounding boxes in the format expected by the COCO API (x_center, y_center, w, h) instead of the (x1, y1, x2, y2) format they actually use. I will open a separate PR to add an option to return the bounding boxes in the COCO API format. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
01-03-2023 11:22:06
01-03-2023 11:22:06
_The documentation is not available anymore as the PR was closed or merged._
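Since the two box layouts are easy to mix up, here is a small, self-contained conversion sketch (illustrative only, not part of this PR): it turns the (x1, y1, x2, y2) corner boxes the post-processing methods actually return into a width/height-based layout.

```python
import torch

def corners_to_xywh(boxes: torch.Tensor, center: bool = False) -> torch.Tensor:
    """Convert (x1, y1, x2, y2) boxes to a width/height-based layout.

    center=False returns (top-left x, top-left y, w, h);
    center=True returns (center x, center y, w, h).
    """
    x1, y1, x2, y2 = boxes.unbind(-1)
    w, h = x2 - x1, y2 - y1
    if center:
        return torch.stack([x1 + w / 2, y1 + h / 2, w, h], dim=-1)
    return torch.stack([x1, y1, w, h], dim=-1)

print(corners_to_xywh(torch.tensor([[10.0, 20.0, 50.0, 80.0]])))
```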
transformers
20,976
closed
Exclude the madeup words from M2M100Tokenizer.vocab_size
# What does this PR do? The `<unk>` token has an incorrect ID in `M2M100Tokenizer.get_vocab`: ```python >>> tokenizer = transformers.M2M100Tokenizer.from_pretrained("facebook/m2m100_418M") >>> tokenizer.convert_tokens_to_ids("<unk>") 3 >>> tokenizer.get_vocab()["<unk>"] 128111 ``` The reason is the vocabulary is defined like this: ``` vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} ``` but `_convert_id_to_token` converts the "madeup words" to `<unk>`. We can fix this issue by excluding the "madeup words" from the vocabulary size, which is consistent with how other tokenizers work such as `NllbTokenizer`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @ArthurZucker
01-03-2023 10:26:08
01-03-2023 10:26:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ArthurZucker, do you have some time to review this PR? Thanks!
transformers
20,975
closed
Fix type casting in compute_segments
# What does this PR do? Fix bug in `feature_extraction_detr.py` compute_segments function: make sure the shape is integer (it could be float) If the shape is float, there will be an error message: ``` zeros() received an invalid combination of arguments - got (tuple, device=torch.device, dtype=torch.dtype), but expected one of: * (tuple of ints size, *, tuple of names names, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) * (tuple of ints size, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-03-2023 08:47:58
01-03-2023 08:47:58
_The documentation is not available anymore as the PR was closed or merged._
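A tiny sketch of the kind of cast the fix refers to; the variable names are illustrative and the real `compute_segments` function has more context around this line.

```python
import torch

def make_empty_segmentation(target_size, device=None):
    # target_size can arrive as floats (e.g. after a resize computation);
    # torch.zeros only accepts integer sizes, so cast explicitly.
    height, width = int(target_size[0]), int(target_size[1])
    return torch.zeros((height, width), device=device, dtype=torch.int32)

print(make_empty_segmentation((480.0, 640.0)).shape)
```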
transformers
20,974
closed
Add perf numbers for perf_train_cpu
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> As mentioned in https://github.com/huggingface/transformers/pull/17138, we are adding some perf numbers in the doc. cc @sywangyi @liangan1 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts and @NielsRogge - speech models: @sanchit-gandhi Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger and @stevhliu HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
01-03-2023 08:11:01
01-03-2023 08:11:01
@sgugger Could you please review this PR? Thanks!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sgugger, For the absolute time number to be shown as public in the doc, can we directly mention and link this [public blog](https://huggingface.co/blog/intel-sapphire-rapids) for a reference in this perf_train_cpu doc instead of providing ours? (to simplify this work and avoid extra internal review procedures) This public blog guides users to run through CPU training and provides realistic results, which is a very practical reference. Thanks. <|||||>Yes, we can definitely link to the blog post instead of the picture.<|||||>> Yes, we can definitely link to the blog post instead of the picture. Hi, @sgugger Have refined this PR to add the link to this blog post as a practice example in the doc. Thanks!<|||||>Thanks again for your contribution!
transformers
20,973
closed
Pipeline to support batch inference
### Feature request Thank you for the awesome framework! For my work I wanted to use `transformers.pipelines.token_classification.TokenClassificationPipeline` in batch mode, since it is much faster on GPU, but I wanted to keep all the functionality for grouping entities. So I want to suggest something like this: ``` nlp = pipeline("ner", model=model, tokenizer=tokenizer, device = 0 if torch.cuda.is_available() else -1, aggregation_strategy="average", batch_size=16) ``` ### Motivation I implemented it for myself and think it would be cool to have this functionality "out-of-the-box" for community to enjoy the speed up. (And it really gives a huge speed up) ### Your contribution I am willing to contribute and implement this change for TokenClassification task (also for TextClassification, FeatureExtraction should be pretty much same). Have not worked with other pipelines, so not sure how batching is implemented there, but I am willing to try and contribute.
01-03-2023 07:56:38
01-03-2023 07:56:38
cc @Narsil <|||||>Hi @maiiabocharova doesn't this work already out of the box? ```python import torch from transformers import pipeline pipe = pipeline( "ner", device=0 if torch.cuda.is_available() else -1, aggregation_strategy="average", batch_size=16, ) original_fn = pipe.model.forward COUNT = 0 def new_forward(*args, **kwargs): global COUNT COUNT += 1 return original_fn(*args, **kwargs) pipe.model.forward = new_forward def data(): for i in range(20): yield "I live in New york" for out in pipe(data()): print(out) print(f"Forward called {COUNT} times") ``` This works, no ?<|||||>Sorry, probably I was looking into wrong source code ```python for i, sentence in enumerate(_inputs): # Manage correct placement of the tensors with self.device_placement(): tokens = self.tokenizer( sentence, return_attention_mask=False, return_tensors=self.framework, truncation=True, return_special_tokens_mask=True, return_offsets_mapping=self.tokenizer.is_fast, ) if self.tokenizer.is_fast: offset_mapping = tokens.pop("offset_mapping").cpu().numpy()[0] elif offset_mappings: offset_mapping = offset_mappings[i] else: offset_mapping = None special_tokens_mask = tokens.pop("special_tokens_mask").cpu().numpy()[0] ``` But actually when I modified this part into ```python for start_index in range(0, len(sentences), batch_size): sentences_batch = sentences[start_index:start_index+batch_size] with self.device_placement(): tokens = self.tokenizer( sentences_batch, return_attention_mask=False, return_tensors=self.framework, truncation=True, padding='longest', return_special_tokens_mask=True, return_offsets_mapping=self.tokenizer.is_fast, ) if self.tokenizer.is_fast: offset_mapping_batch = tokens.pop("offset_mapping").cpu().numpy() special_tokens_mask_batch = tokens.pop("special_tokens_mask").cpu().numpy() with torch.no_grad(): tokens = self.ensure_tensor_on_device(**tokens) entities_batch = self.model(**tokens)[0].cpu().numpy() input_ids_batch = tokens["input_ids"].cpu().numpy() scores_batch = np.exp(entities_batch) / np.exp(entities_batch).sum(-1, keepdims=True) ``` Pipeline started working 3x faster P.S. Yes, you are right! I am sorry, maybe I was using also the old version of the library. Sorry once again!<|||||>Maybe an older version indeed. Also the batching mecanism is not really transparent in the pipeline code, it's meant to be relatively orthogonal (because making it explicit had too many drawbacks, like code duplication, and it was really hard to support more complex use cases).
transformers
20,972
closed
Some issues on summarization example
@patil-suraj Thanks for the beautiful example in `main/examples/pytorch/summarization`. I do, however, have the following issues with the `run_summarization_no_trainer.py` file. 1. The flag `--max_length` seems to be unused. 2. The check `check_min_version("4.26.0.dev0")` is ahead of the current release, `4.25.1`. 3. This file does not have the `test_file` and `max_train/eval/predict_samples` flags, while `run_summarization.py` does. 4. Also, it would be helpful to add gradient norm clipping (see the sketch below). Thanks for your help.
01-03-2023 06:12:56
01-03-2023 06:12:56
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
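On point 4, a hedged sketch of how gradient norm clipping is typically wired into an Accelerate-based loop; the `max_grad_norm` value stands in for a hypothetical command-line argument that the script does not currently define, and the tiny model is only there so the snippet runs.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
model, optimizer = accelerator.prepare(model, optimizer)

max_grad_norm = 1.0  # would come from a hypothetical --max_grad_norm argument
batch = torch.randn(8, 4)
loss = model(batch).pow(2).mean()

accelerator.backward(loss)
# Clip after backward and before the optimizer step.
accelerator.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
optimizer.zero_grad()
```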
transformers
20,971
closed
[run_clm example] add torch_dtype option for model load.
For the BLOOM 175B model, peak memory is reduced by about 350G for inference, since the BLOOM weights on the model hub are stored in bfloat16. Signed-off-by: Wang, Yi A <[email protected]> # What does this PR do? Reduce the peak memory usage for BLOOM inference. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
01-03-2023 06:05:48
01-03-2023 06:05:48
@yao-matrix @jiqing-feng @sgugger please help to review<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger done, added the other torch dtypes per the review comment
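A minimal, hedged sketch of what loading with an explicit `torch_dtype` looks like; a small checkpoint is used so the snippet actually runs, whereas BLOOM-176B needs far more memory than a typical machine has.

```python
import torch
from transformers import AutoModelForCausalLM

# Loading directly in bfloat16 avoids first materializing the weights in float32,
# which is where the large peak-memory saving for BLOOM comes from.
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m",  # small stand-in for the 175B checkpoint
    torch_dtype=torch.bfloat16,
)
print(model.dtype)  # torch.bfloat16
```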
transformers
20,970
closed
Make the attention_head_size in distilbert an object attribute
# What does this PR do? It simply moves the attention_head_size in the distilbert model to be an object attribute. This is necessary if you want to use the Distilbert model in the nn_pruning library. It will also benefit anyone who ever needs to access the attention_head_size attribute from an instance of a Distilbert model. This change is consistent with other transformer models in this library (see BERT https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_bert.py#L253 or BART https://github.com/huggingface/transformers/blob/main/src/transformers/models/bart/modeling_bart.py#L157)
01-03-2023 04:16:18
01-03-2023 04:16:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Thanks for your PR. Could you just run `make style` on your branch to fix the quality issue? Hi @sgugger, thanks for the quick approval. Just fixed the code style<|||||>Thanks again for your contribution!
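A simplified sketch of the change being described; the real `MultiHeadSelfAttention` module has projection layers, dropout, and more, and the exact expression in the merged PR may differ, so only the idea of storing the value as an object attribute is meant literally here.

```python
from torch import nn

class MultiHeadSelfAttentionSketch(nn.Module):
    """Illustrative only -- not the actual DistilBERT module."""

    def __init__(self, dim: int = 768, n_heads: int = 12):
        super().__init__()
        self.n_heads = n_heads
        self.dim = dim
        # Stored as an object attribute so external code (e.g. pruning tools)
        # can read it from a model instance instead of recomputing it.
        self.attention_head_size = dim // n_heads

attn = MultiHeadSelfAttentionSketch()
print(attn.attention_head_size)  # 64
```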